00:00:00.000 Started by upstream project "autotest-per-patch" build number 127160 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.141 Fetching changes from the remote Git repository 00:00:00.143 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.230 > git --version # 'git version 2.39.2' 00:00:00.230 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.480 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.490 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.501 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD) 00:00:06.501 > git config core.sparsecheckout # timeout=10 00:00:06.511 > git read-tree -mu HEAD # timeout=10 00:00:06.530 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5 00:00:06.549 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs" 00:00:06.549 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10 00:00:06.629 [Pipeline] Start of Pipeline 00:00:06.644 [Pipeline] library 00:00:06.645 Loading library shm_lib@master 00:00:06.645 Library shm_lib@master is cached. Copying from home. 00:00:06.662 [Pipeline] node 00:00:06.668 Running on CYP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.669 [Pipeline] { 00:00:06.676 [Pipeline] catchError 00:00:06.677 [Pipeline] { 00:00:06.690 [Pipeline] wrap 00:00:06.700 [Pipeline] { 00:00:06.706 [Pipeline] stage 00:00:06.707 [Pipeline] { (Prologue) 00:00:06.891 [Pipeline] sh 00:00:07.172 + logger -p user.info -t JENKINS-CI 00:00:07.186 [Pipeline] echo 00:00:07.188 Node: CYP6 00:00:07.193 [Pipeline] sh 00:00:07.490 [Pipeline] setCustomBuildProperty 00:00:07.499 [Pipeline] echo 00:00:07.500 Cleanup processes 00:00:07.503 [Pipeline] sh 00:00:07.798 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.798 92773 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.811 [Pipeline] sh 00:00:08.093 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.093 ++ grep -v 'sudo pgrep' 00:00:08.093 ++ awk '{print $1}' 00:00:08.093 + sudo kill -9 00:00:08.093 + true 00:00:08.110 [Pipeline] cleanWs 00:00:08.120 [WS-CLEANUP] Deleting project workspace... 00:00:08.120 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.126 [WS-CLEANUP] done 00:00:08.131 [Pipeline] setCustomBuildProperty 00:00:08.148 [Pipeline] sh 00:00:08.433 + sudo git config --global --replace-all safe.directory '*' 00:00:08.520 [Pipeline] httpRequest 00:00:08.554 [Pipeline] echo 00:00:08.555 Sorcerer 10.211.164.101 is alive 00:00:08.561 [Pipeline] httpRequest 00:00:08.564 HttpMethod: GET 00:00:08.565 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:08.565 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:08.598 Response Code: HTTP/1.1 200 OK 00:00:08.598 Success: Status code 200 is in the accepted range: 200,404 00:00:08.599 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:32.844 [Pipeline] sh 00:00:33.129 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:33.145 [Pipeline] httpRequest 00:00:33.173 [Pipeline] echo 00:00:33.175 Sorcerer 10.211.164.101 is alive 00:00:33.183 [Pipeline] httpRequest 00:00:33.188 HttpMethod: GET 00:00:33.189 URL: http://10.211.164.101/packages/spdk_8fdaab4b1625446f3cf27c1c5e74d12aa1c05419.tar.gz 00:00:33.190 Sending request to url: http://10.211.164.101/packages/spdk_8fdaab4b1625446f3cf27c1c5e74d12aa1c05419.tar.gz 00:00:33.213 Response Code: HTTP/1.1 200 OK 00:00:33.214 Success: Status code 200 is in the accepted range: 200,404 00:00:33.215 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8fdaab4b1625446f3cf27c1c5e74d12aa1c05419.tar.gz 00:01:10.556 [Pipeline] sh 00:01:10.839 + tar --no-same-owner -xf spdk_8fdaab4b1625446f3cf27c1c5e74d12aa1c05419.tar.gz 00:01:14.151 [Pipeline] sh 00:01:14.432 + git -C spdk log --oneline -n5 00:01:14.432 8fdaab4b1 lib/reduce: if memory allocation fails, g_vol_count--. 00:01:14.432 c5d7cded4 bdev/compress: print error code information in load compress bdev 00:01:14.432 58883cba9 bdev/compress: release reduce vol resource when comp bdev fails to be created. 
00:01:14.432 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:01:14.432 c2a77f51e module/bdev/nvme: add detach-monitor poller 00:01:14.445 [Pipeline] } 00:01:14.462 [Pipeline] // stage 00:01:14.473 [Pipeline] stage 00:01:14.475 [Pipeline] { (Prepare) 00:01:14.493 [Pipeline] writeFile 00:01:14.510 [Pipeline] sh 00:01:14.796 + logger -p user.info -t JENKINS-CI 00:01:14.810 [Pipeline] sh 00:01:15.096 + logger -p user.info -t JENKINS-CI 00:01:15.108 [Pipeline] sh 00:01:15.393 + cat autorun-spdk.conf 00:01:15.393 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.393 SPDK_TEST_NVMF=1 00:01:15.393 SPDK_TEST_NVME_CLI=1 00:01:15.393 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.393 SPDK_TEST_NVMF_NICS=e810 00:01:15.393 SPDK_TEST_VFIOUSER=1 00:01:15.393 SPDK_RUN_UBSAN=1 00:01:15.393 NET_TYPE=phy 00:01:15.402 RUN_NIGHTLY=0 00:01:15.407 [Pipeline] readFile 00:01:15.433 [Pipeline] withEnv 00:01:15.435 [Pipeline] { 00:01:15.449 [Pipeline] sh 00:01:15.735 + set -ex 00:01:15.735 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:15.735 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.735 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.735 ++ SPDK_TEST_NVMF=1 00:01:15.735 ++ SPDK_TEST_NVME_CLI=1 00:01:15.735 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.735 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.735 ++ SPDK_TEST_VFIOUSER=1 00:01:15.735 ++ SPDK_RUN_UBSAN=1 00:01:15.735 ++ NET_TYPE=phy 00:01:15.735 ++ RUN_NIGHTLY=0 00:01:15.735 + case $SPDK_TEST_NVMF_NICS in 00:01:15.735 + DRIVERS=ice 00:01:15.735 + [[ tcp == \r\d\m\a ]] 00:01:15.735 + [[ -n ice ]] 00:01:15.735 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:15.735 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:15.735 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:15.735 rmmod: ERROR: Module irdma is not currently loaded 00:01:15.735 rmmod: ERROR: Module i40iw is not currently loaded 00:01:15.735 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:15.735 + true 00:01:15.735 + for D in $DRIVERS 00:01:15.735 + sudo modprobe ice 00:01:15.735 + exit 0 00:01:15.746 [Pipeline] } 00:01:15.764 [Pipeline] // withEnv 00:01:15.769 [Pipeline] } 00:01:15.786 [Pipeline] // stage 00:01:15.796 [Pipeline] catchError 00:01:15.797 [Pipeline] { 00:01:15.813 [Pipeline] timeout 00:01:15.813 Timeout set to expire in 50 min 00:01:15.815 [Pipeline] { 00:01:15.831 [Pipeline] stage 00:01:15.833 [Pipeline] { (Tests) 00:01:15.850 [Pipeline] sh 00:01:16.138 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.138 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.138 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.138 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:16.138 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.138 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:16.138 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:16.139 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:16.139 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:16.139 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:16.139 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:16.139 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.139 + source /etc/os-release 00:01:16.139 ++ NAME='Fedora Linux' 00:01:16.139 ++ VERSION='38 (Cloud Edition)' 00:01:16.139 ++ ID=fedora 00:01:16.139 ++ VERSION_ID=38 00:01:16.139 ++ VERSION_CODENAME= 00:01:16.139 ++ PLATFORM_ID=platform:f38 00:01:16.139 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:16.139 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:16.139 ++ LOGO=fedora-logo-icon 00:01:16.139 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:16.139 ++ HOME_URL=https://fedoraproject.org/ 00:01:16.139 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:16.139 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:16.139 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:16.139 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:16.139 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:16.139 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:16.139 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:16.139 ++ SUPPORT_END=2024-05-14 00:01:16.139 ++ VARIANT='Cloud Edition' 00:01:16.139 ++ VARIANT_ID=cloud 00:01:16.139 + uname -a 00:01:16.139 Linux spdk-CYP-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:16.139 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:19.437 Hugepages 00:01:19.437 node hugesize free / total 00:01:19.437 node0 1048576kB 0 / 0 00:01:19.437 node0 2048kB 0 / 0 00:01:19.697 node1 1048576kB 0 / 0 00:01:19.697 node1 2048kB 0 / 0 00:01:19.697 00:01:19.697 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.697 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:19.697 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:19.697 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:19.697 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:19.697 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:19.697 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:19.697 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:19.697 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:19.697 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:19.697 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:19.697 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:19.697 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:19.697 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:19.697 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:19.697 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:19.697 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:19.697 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:19.697 + rm -f /tmp/spdk-ld-path 00:01:19.697 + source autorun-spdk.conf 00:01:19.697 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.697 ++ SPDK_TEST_NVMF=1 00:01:19.697 ++ SPDK_TEST_NVME_CLI=1 00:01:19.697 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.697 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.697 ++ SPDK_TEST_VFIOUSER=1 00:01:19.697 ++ SPDK_RUN_UBSAN=1 00:01:19.697 ++ NET_TYPE=phy 00:01:19.697 ++ RUN_NIGHTLY=0 00:01:19.697 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.697 + [[ -n '' ]] 00:01:19.697 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.697 + for M in /var/spdk/build-*-manifest.txt 00:01:19.697 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:19.697 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.697 + for M in /var/spdk/build-*-manifest.txt 00:01:19.697 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.697 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.698 ++ uname 00:01:19.698 + [[ Linux == \L\i\n\u\x ]] 00:01:19.698 + sudo dmesg -T 00:01:19.957 + sudo dmesg --clear 00:01:19.957 + dmesg_pid=93881 00:01:19.957 + [[ Fedora Linux == FreeBSD ]] 00:01:19.957 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.957 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.957 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.957 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.957 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.957 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.957 + sudo dmesg -Tw 00:01:19.957 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.957 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:19.957 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.957 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.957 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.957 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.957 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.957 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.957 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.957 Test configuration: 00:01:19.957 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.957 SPDK_TEST_NVMF=1 00:01:19.957 SPDK_TEST_NVME_CLI=1 00:01:19.957 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.957 SPDK_TEST_NVMF_NICS=e810 00:01:19.957 SPDK_TEST_VFIOUSER=1 00:01:19.957 SPDK_RUN_UBSAN=1 00:01:19.957 NET_TYPE=phy 00:01:19.957 RUN_NIGHTLY=0 12:13:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:19.957 12:13:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.957 12:13:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.957 12:13:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.958 12:13:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.958 12:13:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.958 12:13:53 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.958 12:13:53 -- paths/export.sh@5 -- $ export PATH 00:01:19.958 12:13:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.958 12:13:53 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:19.958 12:13:53 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:19.958 12:13:53 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721902433.XXXXXX 00:01:19.958 12:13:53 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721902433.6F4E4N 00:01:19.958 12:13:53 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:19.958 12:13:53 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:19.958 12:13:53 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:19.958 12:13:53 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:19.958 12:13:53 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.958 12:13:53 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:19.958 12:13:53 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:19.958 12:13:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.958 12:13:53 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:19.958 12:13:53 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:19.958 12:13:53 -- pm/common@17 -- $ local monitor 00:01:19.958 12:13:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.958 12:13:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.958 12:13:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.958 12:13:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.958 12:13:53 -- pm/common@25 -- $ sleep 1 00:01:19.958 12:13:53 -- pm/common@21 -- $ date +%s 00:01:19.958 12:13:53 -- pm/common@21 -- $ date +%s 00:01:19.958 12:13:53 -- pm/common@21 -- $ date +%s 00:01:19.958 12:13:53 -- pm/common@21 -- $ date +%s 00:01:19.958 12:13:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721902433 00:01:19.958 12:13:53 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721902433 00:01:19.958 12:13:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721902433 00:01:19.958 12:13:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721902433 00:01:19.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721902433_collect-vmstat.pm.log 00:01:19.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721902433_collect-cpu-load.pm.log 00:01:19.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721902433_collect-cpu-temp.pm.log 00:01:19.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721902433_collect-bmc-pm.bmc.pm.log 00:01:20.899 12:13:54 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:20.899 12:13:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.899 12:13:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.899 12:13:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.899 12:13:54 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.899 Thu Jul 25 10:13:54 AM UTC 2024 00:01:20.899 12:13:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.160 v24.09-pre-305-g8fdaab4b1 00:01:21.160 12:13:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.160 12:13:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.160 12:13:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.160 12:13:54 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:21.160 12:13:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.160 12:13:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.160 ************************************ 00:01:21.160 START TEST ubsan 00:01:21.160 ************************************ 00:01:21.160 12:13:54 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:21.160 using ubsan 00:01:21.160 00:01:21.160 real 0m0.000s 00:01:21.160 user 0m0.000s 00:01:21.160 sys 0m0.000s 00:01:21.160 12:13:54 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:21.160 12:13:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.160 ************************************ 00:01:21.160 END TEST ubsan 00:01:21.160 ************************************ 00:01:21.160 12:13:54 -- common/autotest_common.sh@1142 -- $ return 0 00:01:21.160 12:13:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.160 12:13:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.160 12:13:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.160 12:13:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.160 12:13:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.160 12:13:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.160 12:13:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.160 12:13:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.160 12:13:54 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:21.420 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:21.420 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.680 Using 'verbs' RDMA provider 00:01:37.580 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:49.882 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:49.882 Creating mk/config.mk...done. 00:01:49.882 Creating mk/cc.flags.mk...done. 00:01:49.882 Type 'make' to build. 00:01:49.882 12:14:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:49.882 12:14:23 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:49.882 12:14:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:49.882 12:14:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.882 ************************************ 00:01:49.882 START TEST make 00:01:49.882 ************************************ 00:01:49.882 12:14:23 make -- common/autotest_common.sh@1123 -- $ make -j128 00:01:50.142 make[1]: Nothing to be done for 'all'. 00:01:51.523 The Meson build system 00:01:51.523 Version: 1.3.1 00:01:51.523 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:51.523 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:51.523 Build type: native build 00:01:51.523 Project name: libvfio-user 00:01:51.523 Project version: 0.0.1 00:01:51.523 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.523 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.523 Host machine cpu family: x86_64 00:01:51.523 Host machine cpu: x86_64 00:01:51.523 Run-time dependency threads found: YES 00:01:51.523 Library dl found: YES 00:01:51.523 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.523 Run-time dependency json-c found: YES 0.17 00:01:51.523 Run-time dependency cmocka found: YES 1.1.7 00:01:51.523 Program pytest-3 found: NO 00:01:51.523 Program flake8 found: NO 00:01:51.523 Program misspell-fixer found: NO 00:01:51.523 Program restructuredtext-lint found: NO 00:01:51.523 Program valgrind found: YES (/usr/bin/valgrind) 00:01:51.523 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.523 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.523 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.523 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:51.523 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:51.523 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:51.523 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:51.523 Build targets in project: 8 00:01:51.523 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:51.523 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:51.523 00:01:51.523 libvfio-user 0.0.1 00:01:51.523 00:01:51.523 User defined options 00:01:51.523 buildtype : debug 00:01:51.523 default_library: shared 00:01:51.523 libdir : /usr/local/lib 00:01:51.523 00:01:51.523 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.092 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:52.092 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:52.092 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:52.092 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:52.092 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:52.092 [5/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:52.092 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:52.092 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:52.092 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:52.092 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:52.092 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:52.092 [11/37] Compiling C object samples/null.p/null.c.o 00:01:52.092 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:52.092 [13/37] Compiling C object samples/server.p/server.c.o 00:01:52.092 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:52.092 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:52.092 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:52.352 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:52.352 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:52.352 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:52.352 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:52.352 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:52.352 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:52.352 [23/37] Compiling C object samples/client.p/client.c.o 00:01:52.352 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:52.352 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:52.352 [26/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:52.352 [27/37] Linking target samples/client 00:01:52.352 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:52.352 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:52.352 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:52.352 [31/37] Linking target test/unit_tests 00:01:52.611 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:52.611 [33/37] Linking target samples/server 00:01:52.611 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:52.611 [35/37] Linking target samples/null 00:01:52.611 [36/37] Linking target samples/gpio-pci-idio-16 00:01:52.611 [37/37] Linking target samples/lspci 00:01:52.611 INFO: autodetecting backend as ninja 00:01:52.611 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:52.611 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:52.870 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:53.130 ninja: no work to do. 00:01:59.711 The Meson build system 00:01:59.711 Version: 1.3.1 00:01:59.711 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:59.711 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:59.711 Build type: native build 00:01:59.711 Program cat found: YES (/usr/bin/cat) 00:01:59.711 Project name: DPDK 00:01:59.711 Project version: 24.03.0 00:01:59.711 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.711 C linker for the host machine: cc ld.bfd 2.39-16 00:01:59.711 Host machine cpu family: x86_64 00:01:59.711 Host machine cpu: x86_64 00:01:59.711 Message: ## Building in Developer Mode ## 00:01:59.711 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.711 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.711 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.711 Program python3 found: YES (/usr/bin/python3) 00:01:59.711 Program cat found: YES (/usr/bin/cat) 00:01:59.711 Compiler for C supports arguments -march=native: YES 00:01:59.711 Checking for size of "void *" : 8 00:01:59.711 Checking for size of "void *" : 8 (cached) 00:01:59.711 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:59.711 Library m found: YES 00:01:59.711 Library numa found: YES 00:01:59.711 Has header "numaif.h" : YES 00:01:59.711 Library fdt found: NO 00:01:59.711 Library execinfo found: NO 00:01:59.711 Has header "execinfo.h" : YES 00:01:59.711 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.711 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.711 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.711 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.711 Run-time dependency openssl found: YES 3.0.9 00:01:59.711 Run-time dependency libpcap found: YES 1.10.4 00:01:59.711 Has header "pcap.h" with dependency libpcap: YES 00:01:59.711 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.711 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.711 Compiler for C supports arguments -Wformat: YES 00:01:59.711 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.711 Compiler for C supports arguments -Wformat-security: NO 00:01:59.711 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.711 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.712 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.712 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.712 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.712 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.712 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.712 Compiler for C supports arguments -Wundef: YES 00:01:59.712 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.712 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.712 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:59.712 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.712 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.712 Program objdump found: YES (/usr/bin/objdump) 00:01:59.712 Compiler for C supports arguments -mavx512f: YES 00:01:59.712 Checking if "AVX512 checking" compiles: YES 00:01:59.712 Fetching value of define "__SSE4_2__" : 1 00:01:59.712 Fetching value of define "__AES__" : 1 00:01:59.712 Fetching value of define "__AVX__" : 1 00:01:59.712 Fetching value of define "__AVX2__" : 1 00:01:59.712 Fetching value of define "__AVX512BW__" : 1 00:01:59.712 Fetching value of define "__AVX512CD__" : 1 00:01:59.712 Fetching value of define "__AVX512DQ__" : 1 00:01:59.712 Fetching value of define "__AVX512F__" : 1 00:01:59.712 Fetching value of define "__AVX512VL__" : 1 00:01:59.712 Fetching value of define "__PCLMUL__" : 1 00:01:59.712 Fetching value of define "__RDRND__" : 1 00:01:59.712 Fetching value of define "__RDSEED__" : 1 00:01:59.712 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:59.712 Fetching value of define "__znver1__" : (undefined) 00:01:59.712 Fetching value of define "__znver2__" : (undefined) 00:01:59.712 Fetching value of define "__znver3__" : (undefined) 00:01:59.712 Fetching value of define "__znver4__" : (undefined) 00:01:59.712 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.712 Message: lib/log: Defining dependency "log" 00:01:59.712 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.712 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.712 Checking for function "getentropy" : NO 00:01:59.712 Message: lib/eal: Defining dependency "eal" 00:01:59.712 Message: lib/ring: Defining dependency "ring" 00:01:59.712 Message: lib/rcu: Defining dependency "rcu" 00:01:59.712 Message: lib/mempool: Defining dependency "mempool" 00:01:59.712 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.712 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.712 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.712 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.712 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.712 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.712 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:59.712 Compiler for C supports arguments -mpclmul: YES 00:01:59.712 Compiler for C supports arguments -maes: YES 00:01:59.712 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.712 Compiler for C supports arguments -mavx512bw: YES 00:01:59.712 Compiler for C supports arguments -mavx512dq: YES 00:01:59.712 Compiler for C supports arguments -mavx512vl: YES 00:01:59.712 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.712 Compiler for C supports arguments -mavx2: YES 00:01:59.712 Compiler for C supports arguments -mavx: YES 00:01:59.712 Message: lib/net: Defining dependency "net" 00:01:59.712 Message: lib/meter: Defining dependency "meter" 00:01:59.712 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.712 Message: lib/pci: Defining dependency "pci" 00:01:59.712 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.712 Message: lib/hash: Defining dependency "hash" 00:01:59.712 Message: lib/timer: Defining dependency "timer" 00:01:59.712 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.712 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.712 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.712 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:59.712 Message: lib/power: Defining dependency "power" 00:01:59.712 Message: lib/reorder: Defining dependency "reorder" 00:01:59.712 Message: lib/security: Defining dependency "security" 00:01:59.712 Has header "linux/userfaultfd.h" : YES 00:01:59.712 Has header "linux/vduse.h" : YES 00:01:59.712 Message: lib/vhost: Defining dependency "vhost" 00:01:59.712 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.712 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.712 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.712 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.712 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.712 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.712 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.712 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.712 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.712 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.712 Program doxygen found: YES (/usr/bin/doxygen) 00:01:59.712 Configuring doxy-api-html.conf using configuration 00:01:59.712 Configuring doxy-api-man.conf using configuration 00:01:59.712 Program mandb found: YES (/usr/bin/mandb) 00:01:59.712 Program sphinx-build found: NO 00:01:59.712 Configuring rte_build_config.h using configuration 00:01:59.712 Message: 00:01:59.712 ================= 00:01:59.712 Applications Enabled 00:01:59.712 ================= 00:01:59.712 00:01:59.712 apps: 00:01:59.712 00:01:59.712 00:01:59.712 Message: 00:01:59.712 ================= 00:01:59.712 Libraries Enabled 00:01:59.712 ================= 00:01:59.712 00:01:59.712 libs: 00:01:59.712 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.712 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.712 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.712 00:01:59.712 Message: 00:01:59.712 =============== 00:01:59.712 Drivers Enabled 00:01:59.712 =============== 00:01:59.712 00:01:59.712 common: 00:01:59.712 00:01:59.712 bus: 00:01:59.712 pci, vdev, 00:01:59.712 mempool: 00:01:59.712 ring, 00:01:59.712 dma: 00:01:59.712 00:01:59.712 net: 00:01:59.712 00:01:59.712 crypto: 00:01:59.712 00:01:59.712 compress: 00:01:59.712 00:01:59.712 vdpa: 00:01:59.712 00:01:59.712 00:01:59.712 Message: 00:01:59.712 ================= 00:01:59.712 Content Skipped 00:01:59.712 ================= 00:01:59.712 00:01:59.712 apps: 00:01:59.712 dumpcap: explicitly disabled via build config 00:01:59.712 graph: explicitly disabled via build config 00:01:59.712 pdump: explicitly disabled via build config 00:01:59.712 proc-info: explicitly disabled via build config 00:01:59.712 test-acl: explicitly disabled via build config 00:01:59.712 test-bbdev: explicitly disabled via build config 00:01:59.712 test-cmdline: explicitly disabled via build config 00:01:59.712 test-compress-perf: explicitly disabled via build config 00:01:59.712 test-crypto-perf: explicitly disabled via build config 00:01:59.712 test-dma-perf: explicitly disabled via build config 00:01:59.712 test-eventdev: explicitly disabled via build config 00:01:59.712 test-fib: explicitly disabled via build config 00:01:59.712 test-flow-perf: explicitly disabled via build config 00:01:59.712 test-gpudev: explicitly disabled via build config 00:01:59.712 
test-mldev: explicitly disabled via build config 00:01:59.712 test-pipeline: explicitly disabled via build config 00:01:59.712 test-pmd: explicitly disabled via build config 00:01:59.712 test-regex: explicitly disabled via build config 00:01:59.712 test-sad: explicitly disabled via build config 00:01:59.712 test-security-perf: explicitly disabled via build config 00:01:59.712 00:01:59.712 libs: 00:01:59.712 argparse: explicitly disabled via build config 00:01:59.712 metrics: explicitly disabled via build config 00:01:59.712 acl: explicitly disabled via build config 00:01:59.712 bbdev: explicitly disabled via build config 00:01:59.712 bitratestats: explicitly disabled via build config 00:01:59.712 bpf: explicitly disabled via build config 00:01:59.712 cfgfile: explicitly disabled via build config 00:01:59.712 distributor: explicitly disabled via build config 00:01:59.712 efd: explicitly disabled via build config 00:01:59.712 eventdev: explicitly disabled via build config 00:01:59.712 dispatcher: explicitly disabled via build config 00:01:59.712 gpudev: explicitly disabled via build config 00:01:59.712 gro: explicitly disabled via build config 00:01:59.712 gso: explicitly disabled via build config 00:01:59.712 ip_frag: explicitly disabled via build config 00:01:59.712 jobstats: explicitly disabled via build config 00:01:59.712 latencystats: explicitly disabled via build config 00:01:59.712 lpm: explicitly disabled via build config 00:01:59.712 member: explicitly disabled via build config 00:01:59.712 pcapng: explicitly disabled via build config 00:01:59.712 rawdev: explicitly disabled via build config 00:01:59.712 regexdev: explicitly disabled via build config 00:01:59.712 mldev: explicitly disabled via build config 00:01:59.712 rib: explicitly disabled via build config 00:01:59.712 sched: explicitly disabled via build config 00:01:59.712 stack: explicitly disabled via build config 00:01:59.712 ipsec: explicitly disabled via build config 00:01:59.712 pdcp: explicitly disabled via build config 00:01:59.712 fib: explicitly disabled via build config 00:01:59.712 port: explicitly disabled via build config 00:01:59.712 pdump: explicitly disabled via build config 00:01:59.712 table: explicitly disabled via build config 00:01:59.712 pipeline: explicitly disabled via build config 00:01:59.712 graph: explicitly disabled via build config 00:01:59.713 node: explicitly disabled via build config 00:01:59.713 00:01:59.713 drivers: 00:01:59.713 common/cpt: not in enabled drivers build config 00:01:59.713 common/dpaax: not in enabled drivers build config 00:01:59.713 common/iavf: not in enabled drivers build config 00:01:59.713 common/idpf: not in enabled drivers build config 00:01:59.713 common/ionic: not in enabled drivers build config 00:01:59.713 common/mvep: not in enabled drivers build config 00:01:59.713 common/octeontx: not in enabled drivers build config 00:01:59.713 bus/auxiliary: not in enabled drivers build config 00:01:59.713 bus/cdx: not in enabled drivers build config 00:01:59.713 bus/dpaa: not in enabled drivers build config 00:01:59.713 bus/fslmc: not in enabled drivers build config 00:01:59.713 bus/ifpga: not in enabled drivers build config 00:01:59.713 bus/platform: not in enabled drivers build config 00:01:59.713 bus/uacce: not in enabled drivers build config 00:01:59.713 bus/vmbus: not in enabled drivers build config 00:01:59.713 common/cnxk: not in enabled drivers build config 00:01:59.713 common/mlx5: not in enabled drivers build config 00:01:59.713 common/nfp: not in enabled drivers 
build config 00:01:59.713 common/nitrox: not in enabled drivers build config 00:01:59.713 common/qat: not in enabled drivers build config 00:01:59.713 common/sfc_efx: not in enabled drivers build config 00:01:59.713 mempool/bucket: not in enabled drivers build config 00:01:59.713 mempool/cnxk: not in enabled drivers build config 00:01:59.713 mempool/dpaa: not in enabled drivers build config 00:01:59.713 mempool/dpaa2: not in enabled drivers build config 00:01:59.713 mempool/octeontx: not in enabled drivers build config 00:01:59.713 mempool/stack: not in enabled drivers build config 00:01:59.713 dma/cnxk: not in enabled drivers build config 00:01:59.713 dma/dpaa: not in enabled drivers build config 00:01:59.713 dma/dpaa2: not in enabled drivers build config 00:01:59.713 dma/hisilicon: not in enabled drivers build config 00:01:59.713 dma/idxd: not in enabled drivers build config 00:01:59.713 dma/ioat: not in enabled drivers build config 00:01:59.713 dma/skeleton: not in enabled drivers build config 00:01:59.713 net/af_packet: not in enabled drivers build config 00:01:59.713 net/af_xdp: not in enabled drivers build config 00:01:59.713 net/ark: not in enabled drivers build config 00:01:59.713 net/atlantic: not in enabled drivers build config 00:01:59.713 net/avp: not in enabled drivers build config 00:01:59.713 net/axgbe: not in enabled drivers build config 00:01:59.713 net/bnx2x: not in enabled drivers build config 00:01:59.713 net/bnxt: not in enabled drivers build config 00:01:59.713 net/bonding: not in enabled drivers build config 00:01:59.713 net/cnxk: not in enabled drivers build config 00:01:59.713 net/cpfl: not in enabled drivers build config 00:01:59.713 net/cxgbe: not in enabled drivers build config 00:01:59.713 net/dpaa: not in enabled drivers build config 00:01:59.713 net/dpaa2: not in enabled drivers build config 00:01:59.713 net/e1000: not in enabled drivers build config 00:01:59.713 net/ena: not in enabled drivers build config 00:01:59.713 net/enetc: not in enabled drivers build config 00:01:59.713 net/enetfec: not in enabled drivers build config 00:01:59.713 net/enic: not in enabled drivers build config 00:01:59.713 net/failsafe: not in enabled drivers build config 00:01:59.713 net/fm10k: not in enabled drivers build config 00:01:59.713 net/gve: not in enabled drivers build config 00:01:59.713 net/hinic: not in enabled drivers build config 00:01:59.713 net/hns3: not in enabled drivers build config 00:01:59.713 net/i40e: not in enabled drivers build config 00:01:59.713 net/iavf: not in enabled drivers build config 00:01:59.713 net/ice: not in enabled drivers build config 00:01:59.713 net/idpf: not in enabled drivers build config 00:01:59.713 net/igc: not in enabled drivers build config 00:01:59.713 net/ionic: not in enabled drivers build config 00:01:59.713 net/ipn3ke: not in enabled drivers build config 00:01:59.713 net/ixgbe: not in enabled drivers build config 00:01:59.713 net/mana: not in enabled drivers build config 00:01:59.713 net/memif: not in enabled drivers build config 00:01:59.713 net/mlx4: not in enabled drivers build config 00:01:59.713 net/mlx5: not in enabled drivers build config 00:01:59.713 net/mvneta: not in enabled drivers build config 00:01:59.713 net/mvpp2: not in enabled drivers build config 00:01:59.713 net/netvsc: not in enabled drivers build config 00:01:59.713 net/nfb: not in enabled drivers build config 00:01:59.713 net/nfp: not in enabled drivers build config 00:01:59.713 net/ngbe: not in enabled drivers build config 00:01:59.713 net/null: not in 
enabled drivers build config 00:01:59.713 net/octeontx: not in enabled drivers build config 00:01:59.713 net/octeon_ep: not in enabled drivers build config 00:01:59.713 net/pcap: not in enabled drivers build config 00:01:59.713 net/pfe: not in enabled drivers build config 00:01:59.713 net/qede: not in enabled drivers build config 00:01:59.713 net/ring: not in enabled drivers build config 00:01:59.713 net/sfc: not in enabled drivers build config 00:01:59.713 net/softnic: not in enabled drivers build config 00:01:59.713 net/tap: not in enabled drivers build config 00:01:59.713 net/thunderx: not in enabled drivers build config 00:01:59.713 net/txgbe: not in enabled drivers build config 00:01:59.713 net/vdev_netvsc: not in enabled drivers build config 00:01:59.713 net/vhost: not in enabled drivers build config 00:01:59.713 net/virtio: not in enabled drivers build config 00:01:59.713 net/vmxnet3: not in enabled drivers build config 00:01:59.713 raw/*: missing internal dependency, "rawdev" 00:01:59.713 crypto/armv8: not in enabled drivers build config 00:01:59.713 crypto/bcmfs: not in enabled drivers build config 00:01:59.713 crypto/caam_jr: not in enabled drivers build config 00:01:59.713 crypto/ccp: not in enabled drivers build config 00:01:59.713 crypto/cnxk: not in enabled drivers build config 00:01:59.713 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.713 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.713 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.713 crypto/mlx5: not in enabled drivers build config 00:01:59.713 crypto/mvsam: not in enabled drivers build config 00:01:59.713 crypto/nitrox: not in enabled drivers build config 00:01:59.713 crypto/null: not in enabled drivers build config 00:01:59.713 crypto/octeontx: not in enabled drivers build config 00:01:59.713 crypto/openssl: not in enabled drivers build config 00:01:59.713 crypto/scheduler: not in enabled drivers build config 00:01:59.713 crypto/uadk: not in enabled drivers build config 00:01:59.713 crypto/virtio: not in enabled drivers build config 00:01:59.713 compress/isal: not in enabled drivers build config 00:01:59.713 compress/mlx5: not in enabled drivers build config 00:01:59.713 compress/nitrox: not in enabled drivers build config 00:01:59.713 compress/octeontx: not in enabled drivers build config 00:01:59.713 compress/zlib: not in enabled drivers build config 00:01:59.713 regex/*: missing internal dependency, "regexdev" 00:01:59.713 ml/*: missing internal dependency, "mldev" 00:01:59.713 vdpa/ifc: not in enabled drivers build config 00:01:59.713 vdpa/mlx5: not in enabled drivers build config 00:01:59.713 vdpa/nfp: not in enabled drivers build config 00:01:59.713 vdpa/sfc: not in enabled drivers build config 00:01:59.713 event/*: missing internal dependency, "eventdev" 00:01:59.713 baseband/*: missing internal dependency, "bbdev" 00:01:59.713 gpu/*: missing internal dependency, "gpudev" 00:01:59.713 00:01:59.713 00:01:59.713 Build targets in project: 84 00:01:59.713 00:01:59.713 DPDK 24.03.0 00:01:59.713 00:01:59.713 User defined options 00:01:59.713 buildtype : debug 00:01:59.713 default_library : shared 00:01:59.713 libdir : lib 00:01:59.713 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:59.713 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.713 c_link_args : 00:01:59.713 cpu_instruction_set: native 00:01:59.713 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:59.713 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:59.713 enable_docs : false 00:01:59.713 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.713 enable_kmods : false 00:01:59.713 max_lcores : 128 00:01:59.713 tests : false 00:01:59.713 00:01:59.713 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.713 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:59.713 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.713 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.713 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.713 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.713 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.713 [6/267] Linking static target lib/librte_kvargs.a 00:01:59.713 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.713 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.713 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.713 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:59.713 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:59.713 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:59.713 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.713 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.713 [15/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.714 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:59.714 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:59.714 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.714 [19/267] Linking static target lib/librte_log.a 00:01:59.714 [20/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:59.714 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:59.714 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:59.714 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:59.714 [24/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:59.714 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:59.714 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:59.714 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:59.714 [28/267] Linking static target lib/librte_pci.a 00:01:59.714 [29/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:59.714 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:59.714 [31/267] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:59.714 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.714 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:59.714 [34/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:59.971 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:59.971 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:59.971 [37/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.971 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.971 [39/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.971 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:59.971 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.971 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.971 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.971 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:59.971 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.971 [46/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:59.971 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:59.971 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:59.971 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:59.971 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:59.971 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:59.971 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:59.971 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.971 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:59.971 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.971 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:59.971 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:59.971 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:59.971 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:59.971 [60/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:59.971 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:59.971 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.971 [63/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:59.971 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:59.971 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:59.971 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:59.972 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:59.972 [68/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:59.972 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:59.972 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:59.972 [71/267] Compiling C 
object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:59.972 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:59.972 [73/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.972 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:59.972 [75/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.972 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:59.972 [77/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:59.972 [78/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:59.972 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:59.972 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:59.972 [81/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.972 [82/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:59.972 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:59.972 [84/267] Linking static target lib/librte_ring.a 00:01:59.972 [85/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:59.972 [86/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:59.972 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:59.972 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:59.972 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:59.972 [90/267] Linking static target lib/librte_meter.a 00:01:59.972 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:59.972 [92/267] Linking static target lib/librte_telemetry.a 00:02:00.230 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.230 [94/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.230 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.230 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.230 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:00.230 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.230 [99/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:00.230 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.230 [101/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:00.230 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.230 [103/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.230 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:00.230 [105/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:00.230 [106/267] Linking static target lib/librte_cmdline.a 00:02:00.230 [107/267] Linking static target lib/librte_timer.a 00:02:00.230 [108/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.230 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.230 [110/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.230 [111/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:00.230 
[112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.230 [113/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.230 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.230 [115/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:00.230 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.230 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.230 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:00.230 [119/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.230 [120/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.230 [121/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:00.230 [122/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.230 [123/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:00.230 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:00.230 [125/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:00.230 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.230 [127/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:00.230 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.230 [129/267] Linking static target lib/librte_dmadev.a 00:02:00.230 [130/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.230 [131/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.230 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.230 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.230 [134/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:00.230 [135/267] Linking static target lib/librte_net.a 00:02:00.230 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.230 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.230 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:00.230 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:00.230 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.230 [141/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.230 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:00.230 [143/267] Linking static target lib/librte_rcu.a 00:02:00.230 [144/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.230 [145/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.230 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:00.230 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.230 [148/267] Linking static target lib/librte_compressdev.a 00:02:00.230 [149/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.230 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:00.230 [151/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:00.230 [152/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:00.230 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:00.230 [154/267] Linking static target lib/librte_power.a 00:02:00.230 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.230 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:00.230 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:00.230 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:00.230 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.230 [160/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:00.230 [161/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:00.230 [162/267] Linking static target lib/librte_mbuf.a 00:02:00.490 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.490 [164/267] Linking static target lib/librte_security.a 00:02:00.490 [165/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.490 [166/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.490 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:00.490 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.490 [169/267] Linking static target lib/librte_mempool.a 00:02:00.490 [170/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.490 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:00.490 [172/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.490 [173/267] Linking static target lib/librte_reorder.a 00:02:00.491 [174/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.491 [175/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:00.491 [176/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.491 [177/267] Linking static target lib/librte_hash.a 00:02:00.491 [178/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.491 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.491 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:00.491 [181/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.491 [182/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:00.491 [183/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:00.491 [184/267] Linking target lib/librte_log.so.24.1 00:02:00.491 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.491 [186/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.491 [187/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.491 [188/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:00.491 [189/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:00.491 [190/267] Linking static target lib/librte_cryptodev.a 00:02:00.491 [191/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.491 [192/267] Linking static target lib/librte_eal.a 00:02:00.750 [193/267] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.750 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.750 [195/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.750 [196/267] Linking static target drivers/librte_bus_pci.a 00:02:00.750 [197/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.750 [198/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:00.750 [199/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.750 [200/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.750 [201/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.750 [202/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.750 [203/267] Linking static target drivers/librte_bus_vdev.a 00:02:00.750 [204/267] Linking target lib/librte_kvargs.so.24.1 00:02:00.750 [205/267] Linking target lib/librte_telemetry.so.24.1 00:02:00.750 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.750 [207/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:00.750 [208/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.750 [209/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.750 [210/267] Linking static target drivers/librte_mempool_ring.a 00:02:00.750 [211/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:00.750 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:01.010 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.010 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.010 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.010 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.010 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.010 [218/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.010 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.010 [220/267] Linking static target lib/librte_ethdev.a 00:02:01.271 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.271 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.532 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.532 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.532 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.532 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.103 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.103 [228/267] Linking static target lib/librte_vhost.a 00:02:02.674 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.589 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.177 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.559 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.559 [233/267] Linking target lib/librte_eal.so.24.1 00:02:12.820 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:12.820 [235/267] Linking target lib/librte_ring.so.24.1 00:02:12.820 [236/267] Linking target lib/librte_meter.so.24.1 00:02:12.820 [237/267] Linking target lib/librte_timer.so.24.1 00:02:12.820 [238/267] Linking target lib/librte_pci.so.24.1 00:02:12.820 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:12.820 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.081 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.081 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.081 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.081 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.081 [245/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.342 [246/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.342 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:13.342 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:13.342 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.342 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.603 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.603 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:13.603 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.603 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:13.603 [255/267] Linking target lib/librte_net.so.24.1 00:02:13.603 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:13.603 [257/267] Linking target lib/librte_compressdev.so.24.1 00:02:13.863 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:13.863 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:13.863 [260/267] Linking target lib/librte_security.so.24.1 00:02:13.863 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:13.863 [262/267] Linking target lib/librte_hash.so.24.1 00:02:13.863 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:14.125 [264/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:14.125 [265/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.125 [266/267] Linking target lib/librte_power.so.24.1 00:02:14.125 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:14.125 INFO: autodetecting backend as ninja 00:02:14.125 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 128 00:02:15.509 CC lib/ut/ut.o 00:02:15.509 CC lib/ut_mock/mock.o 00:02:15.509 CC lib/log/log.o 00:02:15.509 CC lib/log/log_flags.o 00:02:15.509 CC lib/log/log_deprecated.o 00:02:15.509 LIB libspdk_ut.a 
00:02:15.509 LIB libspdk_log.a 00:02:15.509 LIB libspdk_ut_mock.a 00:02:15.509 SO libspdk_ut.so.2.0 00:02:15.509 SO libspdk_log.so.7.0 00:02:15.509 SO libspdk_ut_mock.so.6.0 00:02:15.509 SYMLINK libspdk_ut.so 00:02:15.509 SYMLINK libspdk_ut_mock.so 00:02:15.509 SYMLINK libspdk_log.so 00:02:16.078 CC lib/dma/dma.o 00:02:16.078 CC lib/ioat/ioat.o 00:02:16.078 CXX lib/trace_parser/trace.o 00:02:16.078 CC lib/util/base64.o 00:02:16.078 CC lib/util/bit_array.o 00:02:16.078 CC lib/util/cpuset.o 00:02:16.078 CC lib/util/crc16.o 00:02:16.078 CC lib/util/crc32.o 00:02:16.078 CC lib/util/crc32c.o 00:02:16.078 CC lib/util/crc32_ieee.o 00:02:16.078 CC lib/util/crc64.o 00:02:16.078 CC lib/util/dif.o 00:02:16.078 CC lib/util/fd.o 00:02:16.078 CC lib/util/fd_group.o 00:02:16.078 CC lib/util/file.o 00:02:16.078 CC lib/util/hexlify.o 00:02:16.078 CC lib/util/iov.o 00:02:16.078 CC lib/util/math.o 00:02:16.078 CC lib/util/net.o 00:02:16.078 CC lib/util/pipe.o 00:02:16.078 CC lib/util/strerror_tls.o 00:02:16.078 CC lib/util/string.o 00:02:16.078 CC lib/util/uuid.o 00:02:16.078 CC lib/util/xor.o 00:02:16.078 CC lib/util/zipf.o 00:02:16.078 CC lib/vfio_user/host/vfio_user_pci.o 00:02:16.078 CC lib/vfio_user/host/vfio_user.o 00:02:16.078 LIB libspdk_dma.a 00:02:16.338 SO libspdk_dma.so.4.0 00:02:16.338 LIB libspdk_ioat.a 00:02:16.338 SYMLINK libspdk_dma.so 00:02:16.338 SO libspdk_ioat.so.7.0 00:02:16.338 SYMLINK libspdk_ioat.so 00:02:16.338 LIB libspdk_vfio_user.a 00:02:16.599 SO libspdk_vfio_user.so.5.0 00:02:16.599 SYMLINK libspdk_vfio_user.so 00:02:16.861 LIB libspdk_trace_parser.a 00:02:16.861 SO libspdk_trace_parser.so.5.0 00:02:16.861 SYMLINK libspdk_trace_parser.so 00:02:17.122 LIB libspdk_util.a 00:02:17.122 SO libspdk_util.so.10.0 00:02:17.383 SYMLINK libspdk_util.so 00:02:17.643 CC lib/vmd/vmd.o 00:02:17.643 CC lib/vmd/led.o 00:02:17.643 CC lib/json/json_parse.o 00:02:17.643 CC lib/json/json_util.o 00:02:17.643 CC lib/json/json_write.o 00:02:17.643 CC lib/env_dpdk/env.o 00:02:17.643 CC lib/env_dpdk/memory.o 00:02:17.643 CC lib/env_dpdk/pci.o 00:02:17.643 CC lib/rdma_provider/common.o 00:02:17.643 CC lib/env_dpdk/init.o 00:02:17.643 CC lib/idxd/idxd.o 00:02:17.643 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:17.643 CC lib/env_dpdk/threads.o 00:02:17.643 CC lib/conf/conf.o 00:02:17.643 CC lib/env_dpdk/pci_ioat.o 00:02:17.643 CC lib/rdma_utils/rdma_utils.o 00:02:17.643 CC lib/idxd/idxd_user.o 00:02:17.643 CC lib/env_dpdk/pci_virtio.o 00:02:17.643 CC lib/idxd/idxd_kernel.o 00:02:17.643 CC lib/env_dpdk/pci_vmd.o 00:02:17.643 CC lib/env_dpdk/pci_idxd.o 00:02:17.643 CC lib/env_dpdk/pci_event.o 00:02:17.643 CC lib/env_dpdk/sigbus_handler.o 00:02:17.643 CC lib/env_dpdk/pci_dpdk.o 00:02:17.643 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:17.643 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:17.930 LIB libspdk_conf.a 00:02:18.195 LIB libspdk_rdma_utils.a 00:02:18.195 SO libspdk_conf.so.6.0 00:02:18.195 LIB libspdk_json.a 00:02:18.195 SO libspdk_rdma_utils.so.1.0 00:02:18.195 SO libspdk_json.so.6.0 00:02:18.195 SYMLINK libspdk_conf.so 00:02:18.195 SYMLINK libspdk_rdma_utils.so 00:02:18.195 SYMLINK libspdk_json.so 00:02:18.195 LIB libspdk_rdma_provider.a 00:02:18.195 SO libspdk_rdma_provider.so.6.0 00:02:18.195 LIB libspdk_idxd.a 00:02:18.195 SO libspdk_idxd.so.12.0 00:02:18.195 SYMLINK libspdk_rdma_provider.so 00:02:18.195 LIB libspdk_vmd.a 00:02:18.455 SO libspdk_vmd.so.6.0 00:02:18.455 SYMLINK libspdk_idxd.so 00:02:18.455 SYMLINK libspdk_vmd.so 00:02:18.455 CC lib/jsonrpc/jsonrpc_server.o 00:02:18.455 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.455 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.455 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:18.716 LIB libspdk_jsonrpc.a 00:02:18.716 SO libspdk_jsonrpc.so.6.0 00:02:18.978 SYMLINK libspdk_jsonrpc.so 00:02:18.978 LIB libspdk_env_dpdk.a 00:02:18.978 SO libspdk_env_dpdk.so.15.0 00:02:19.239 SYMLINK libspdk_env_dpdk.so 00:02:19.239 CC lib/rpc/rpc.o 00:02:19.499 LIB libspdk_rpc.a 00:02:19.499 SO libspdk_rpc.so.6.0 00:02:19.499 SYMLINK libspdk_rpc.so 00:02:20.071 CC lib/trace/trace.o 00:02:20.072 CC lib/trace/trace_flags.o 00:02:20.072 CC lib/trace/trace_rpc.o 00:02:20.072 CC lib/notify/notify.o 00:02:20.072 CC lib/notify/notify_rpc.o 00:02:20.072 CC lib/keyring/keyring.o 00:02:20.072 CC lib/keyring/keyring_rpc.o 00:02:20.072 LIB libspdk_notify.a 00:02:20.072 SO libspdk_notify.so.6.0 00:02:20.072 LIB libspdk_keyring.a 00:02:20.072 LIB libspdk_trace.a 00:02:20.332 SO libspdk_keyring.so.1.0 00:02:20.332 SYMLINK libspdk_notify.so 00:02:20.332 SO libspdk_trace.so.10.0 00:02:20.332 SYMLINK libspdk_keyring.so 00:02:20.332 SYMLINK libspdk_trace.so 00:02:20.592 CC lib/sock/sock.o 00:02:20.592 CC lib/sock/sock_rpc.o 00:02:20.592 CC lib/thread/thread.o 00:02:20.592 CC lib/thread/iobuf.o 00:02:21.163 LIB libspdk_sock.a 00:02:21.163 SO libspdk_sock.so.10.0 00:02:21.163 SYMLINK libspdk_sock.so 00:02:21.423 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.423 CC lib/nvme/nvme_ctrlr.o 00:02:21.423 CC lib/nvme/nvme_fabric.o 00:02:21.423 CC lib/nvme/nvme_ns_cmd.o 00:02:21.423 CC lib/nvme/nvme_ns.o 00:02:21.423 CC lib/nvme/nvme_pcie_common.o 00:02:21.423 CC lib/nvme/nvme_pcie.o 00:02:21.423 CC lib/nvme/nvme_qpair.o 00:02:21.423 CC lib/nvme/nvme.o 00:02:21.423 CC lib/nvme/nvme_quirks.o 00:02:21.423 CC lib/nvme/nvme_transport.o 00:02:21.423 CC lib/nvme/nvme_discovery.o 00:02:21.423 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.423 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.423 CC lib/nvme/nvme_tcp.o 00:02:21.423 CC lib/nvme/nvme_opal.o 00:02:21.423 CC lib/nvme/nvme_io_msg.o 00:02:21.423 CC lib/nvme/nvme_poll_group.o 00:02:21.423 CC lib/nvme/nvme_zns.o 00:02:21.423 CC lib/nvme/nvme_stubs.o 00:02:21.423 CC lib/nvme/nvme_auth.o 00:02:21.423 CC lib/nvme/nvme_cuse.o 00:02:21.423 CC lib/nvme/nvme_vfio_user.o 00:02:21.423 CC lib/nvme/nvme_rdma.o 00:02:21.994 LIB libspdk_thread.a 00:02:21.994 SO libspdk_thread.so.10.1 00:02:21.994 SYMLINK libspdk_thread.so 00:02:22.255 CC lib/accel/accel.o 00:02:22.255 CC lib/accel/accel_rpc.o 00:02:22.255 CC lib/accel/accel_sw.o 00:02:22.255 CC lib/vfu_tgt/tgt_endpoint.o 00:02:22.255 CC lib/vfu_tgt/tgt_rpc.o 00:02:22.255 CC lib/blob/blobstore.o 00:02:22.255 CC lib/virtio/virtio.o 00:02:22.255 CC lib/blob/request.o 00:02:22.255 CC lib/virtio/virtio_vhost_user.o 00:02:22.255 CC lib/blob/zeroes.o 00:02:22.255 CC lib/virtio/virtio_vfio_user.o 00:02:22.255 CC lib/blob/blob_bs_dev.o 00:02:22.255 CC lib/virtio/virtio_pci.o 00:02:22.516 CC lib/init/json_config.o 00:02:22.516 CC lib/init/subsystem.o 00:02:22.516 CC lib/init/subsystem_rpc.o 00:02:22.516 CC lib/init/rpc.o 00:02:22.516 LIB libspdk_init.a 00:02:22.778 LIB libspdk_vfu_tgt.a 00:02:22.778 SO libspdk_init.so.5.0 00:02:22.778 SO libspdk_vfu_tgt.so.3.0 00:02:22.778 LIB libspdk_virtio.a 00:02:22.778 SO libspdk_virtio.so.7.0 00:02:22.778 SYMLINK libspdk_init.so 00:02:22.778 SYMLINK libspdk_vfu_tgt.so 00:02:22.778 SYMLINK libspdk_virtio.so 00:02:23.038 CC lib/event/app.o 00:02:23.038 CC lib/event/reactor.o 00:02:23.038 CC lib/event/log_rpc.o 00:02:23.038 CC lib/event/app_rpc.o 00:02:23.038 CC 
lib/event/scheduler_static.o 00:02:23.298 LIB libspdk_accel.a 00:02:23.298 SO libspdk_accel.so.16.0 00:02:23.298 SYMLINK libspdk_accel.so 00:02:23.559 LIB libspdk_event.a 00:02:23.559 SO libspdk_event.so.14.0 00:02:23.559 SYMLINK libspdk_event.so 00:02:23.559 CC lib/bdev/bdev.o 00:02:23.559 CC lib/bdev/bdev_rpc.o 00:02:23.559 CC lib/bdev/bdev_zone.o 00:02:23.559 CC lib/bdev/scsi_nvme.o 00:02:23.559 CC lib/bdev/part.o 00:02:24.943 LIB libspdk_blob.a 00:02:24.943 SO libspdk_blob.so.11.0 00:02:24.943 LIB libspdk_nvme.a 00:02:24.943 SYMLINK libspdk_blob.so 00:02:24.943 SO libspdk_nvme.so.13.1 00:02:25.203 SYMLINK libspdk_nvme.so 00:02:25.203 CC lib/blobfs/blobfs.o 00:02:25.203 CC lib/lvol/lvol.o 00:02:25.203 CC lib/blobfs/tree.o 00:02:25.774 LIB libspdk_bdev.a 00:02:25.774 SO libspdk_bdev.so.16.0 00:02:26.034 SYMLINK libspdk_bdev.so 00:02:26.034 LIB libspdk_blobfs.a 00:02:26.034 SO libspdk_blobfs.so.10.0 00:02:26.034 LIB libspdk_lvol.a 00:02:26.034 SYMLINK libspdk_blobfs.so 00:02:26.034 SO libspdk_lvol.so.10.0 00:02:26.293 SYMLINK libspdk_lvol.so 00:02:26.293 CC lib/ublk/ublk.o 00:02:26.293 CC lib/ublk/ublk_rpc.o 00:02:26.293 CC lib/ftl/ftl_core.o 00:02:26.293 CC lib/ftl/ftl_init.o 00:02:26.293 CC lib/ftl/ftl_layout.o 00:02:26.293 CC lib/ftl/ftl_debug.o 00:02:26.293 CC lib/scsi/dev.o 00:02:26.293 CC lib/ftl/ftl_io.o 00:02:26.293 CC lib/scsi/lun.o 00:02:26.293 CC lib/ftl/ftl_sb.o 00:02:26.293 CC lib/scsi/port.o 00:02:26.293 CC lib/ftl/ftl_l2p_flat.o 00:02:26.293 CC lib/ftl/ftl_l2p.o 00:02:26.293 CC lib/scsi/scsi.o 00:02:26.293 CC lib/scsi/scsi_bdev.o 00:02:26.293 CC lib/nvmf/ctrlr.o 00:02:26.293 CC lib/ftl/ftl_nv_cache.o 00:02:26.293 CC lib/scsi/scsi_pr.o 00:02:26.293 CC lib/nvmf/ctrlr_discovery.o 00:02:26.293 CC lib/ftl/ftl_band.o 00:02:26.293 CC lib/scsi/scsi_rpc.o 00:02:26.293 CC lib/ftl/ftl_band_ops.o 00:02:26.293 CC lib/nbd/nbd.o 00:02:26.293 CC lib/nvmf/ctrlr_bdev.o 00:02:26.294 CC lib/ftl/ftl_writer.o 00:02:26.294 CC lib/scsi/task.o 00:02:26.294 CC lib/nbd/nbd_rpc.o 00:02:26.294 CC lib/nvmf/subsystem.o 00:02:26.294 CC lib/ftl/ftl_rq.o 00:02:26.294 CC lib/nvmf/nvmf.o 00:02:26.294 CC lib/nvmf/nvmf_rpc.o 00:02:26.294 CC lib/ftl/ftl_reloc.o 00:02:26.294 CC lib/ftl/ftl_l2p_cache.o 00:02:26.294 CC lib/ftl/ftl_p2l.o 00:02:26.294 CC lib/nvmf/transport.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt.o 00:02:26.294 CC lib/nvmf/tcp.o 00:02:26.294 CC lib/nvmf/stubs.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:26.294 CC lib/nvmf/mdns_server.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:26.294 CC lib/nvmf/vfio_user.o 00:02:26.294 CC lib/nvmf/rdma.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:26.294 CC lib/nvmf/auth.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:26.294 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:26.294 CC lib/ftl/utils/ftl_conf.o 00:02:26.294 CC lib/ftl/utils/ftl_md.o 00:02:26.294 CC lib/ftl/utils/ftl_mempool.o 00:02:26.294 CC lib/ftl/utils/ftl_bitmap.o 00:02:26.294 CC lib/ftl/utils/ftl_property.o 00:02:26.294 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:26.294 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:26.294 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:26.294 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:26.294 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:26.294 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:26.294 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:26.294 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:26.294 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:26.294 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:26.294 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:26.294 CC lib/ftl/base/ftl_base_dev.o 00:02:26.294 CC lib/ftl/base/ftl_base_bdev.o 00:02:26.294 CC lib/ftl/ftl_trace.o 00:02:26.860 LIB libspdk_nbd.a 00:02:27.120 SO libspdk_nbd.so.7.0 00:02:27.120 LIB libspdk_scsi.a 00:02:27.120 SYMLINK libspdk_nbd.so 00:02:27.120 SO libspdk_scsi.so.9.0 00:02:27.120 LIB libspdk_ublk.a 00:02:27.120 SO libspdk_ublk.so.3.0 00:02:27.120 SYMLINK libspdk_scsi.so 00:02:27.379 SYMLINK libspdk_ublk.so 00:02:27.639 CC lib/iscsi/conn.o 00:02:27.639 CC lib/iscsi/init_grp.o 00:02:27.639 CC lib/iscsi/iscsi.o 00:02:27.639 CC lib/iscsi/md5.o 00:02:27.639 CC lib/iscsi/param.o 00:02:27.639 CC lib/iscsi/portal_grp.o 00:02:27.639 CC lib/vhost/vhost.o 00:02:27.639 CC lib/vhost/vhost_rpc.o 00:02:27.639 CC lib/iscsi/tgt_node.o 00:02:27.639 CC lib/iscsi/iscsi_subsystem.o 00:02:27.639 CC lib/vhost/vhost_blk.o 00:02:27.639 CC lib/vhost/vhost_scsi.o 00:02:27.639 CC lib/iscsi/iscsi_rpc.o 00:02:27.639 CC lib/iscsi/task.o 00:02:27.639 CC lib/vhost/rte_vhost_user.o 00:02:27.639 LIB libspdk_ftl.a 00:02:27.900 SO libspdk_ftl.so.9.0 00:02:28.160 SYMLINK libspdk_ftl.so 00:02:28.160 LIB libspdk_nvmf.a 00:02:28.421 SO libspdk_nvmf.so.19.0 00:02:28.682 SYMLINK libspdk_nvmf.so 00:02:28.682 LIB libspdk_vhost.a 00:02:28.682 SO libspdk_vhost.so.8.0 00:02:28.682 SYMLINK libspdk_vhost.so 00:02:29.255 LIB libspdk_iscsi.a 00:02:29.255 SO libspdk_iscsi.so.8.0 00:02:29.255 SYMLINK libspdk_iscsi.so 00:02:29.827 CC module/vfu_device/vfu_virtio.o 00:02:29.827 CC module/vfu_device/vfu_virtio_blk.o 00:02:29.827 CC module/vfu_device/vfu_virtio_scsi.o 00:02:29.827 CC module/vfu_device/vfu_virtio_rpc.o 00:02:29.827 CC module/env_dpdk/env_dpdk_rpc.o 00:02:30.088 LIB libspdk_env_dpdk_rpc.a 00:02:30.088 CC module/accel/error/accel_error.o 00:02:30.088 CC module/accel/error/accel_error_rpc.o 00:02:30.088 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:30.088 CC module/accel/dsa/accel_dsa.o 00:02:30.088 CC module/accel/dsa/accel_dsa_rpc.o 00:02:30.088 CC module/accel/ioat/accel_ioat.o 00:02:30.088 CC module/accel/ioat/accel_ioat_rpc.o 00:02:30.088 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:30.088 CC module/accel/iaa/accel_iaa.o 00:02:30.088 CC module/accel/iaa/accel_iaa_rpc.o 00:02:30.088 CC module/blob/bdev/blob_bdev.o 00:02:30.088 CC module/sock/posix/posix.o 00:02:30.088 CC module/keyring/linux/keyring_rpc.o 00:02:30.088 CC module/scheduler/gscheduler/gscheduler.o 00:02:30.088 CC module/keyring/linux/keyring.o 00:02:30.088 CC module/keyring/file/keyring.o 00:02:30.088 CC module/keyring/file/keyring_rpc.o 00:02:30.088 SO libspdk_env_dpdk_rpc.so.6.0 00:02:30.088 SYMLINK libspdk_env_dpdk_rpc.so 00:02:30.348 LIB libspdk_scheduler_dpdk_governor.a 00:02:30.348 LIB libspdk_keyring_linux.a 00:02:30.348 LIB libspdk_keyring_file.a 00:02:30.348 LIB libspdk_accel_error.a 00:02:30.348 LIB libspdk_accel_iaa.a 00:02:30.348 LIB libspdk_accel_ioat.a 00:02:30.348 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:30.348 LIB libspdk_scheduler_dynamic.a 00:02:30.348 SO libspdk_accel_error.so.2.0 00:02:30.348 SO libspdk_keyring_linux.so.1.0 00:02:30.348 SO libspdk_accel_iaa.so.3.0 00:02:30.348 SO libspdk_keyring_file.so.1.0 00:02:30.348 SO libspdk_accel_ioat.so.6.0 00:02:30.348 LIB libspdk_accel_dsa.a 
00:02:30.348 LIB libspdk_blob_bdev.a 00:02:30.348 SO libspdk_scheduler_dynamic.so.4.0 00:02:30.348 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:30.348 SYMLINK libspdk_accel_error.so 00:02:30.348 SO libspdk_blob_bdev.so.11.0 00:02:30.348 SO libspdk_accel_dsa.so.5.0 00:02:30.348 SYMLINK libspdk_accel_ioat.so 00:02:30.348 SYMLINK libspdk_accel_iaa.so 00:02:30.348 SYMLINK libspdk_keyring_file.so 00:02:30.348 SYMLINK libspdk_keyring_linux.so 00:02:30.348 SYMLINK libspdk_scheduler_dynamic.so 00:02:30.348 LIB libspdk_vfu_device.a 00:02:30.348 SYMLINK libspdk_blob_bdev.so 00:02:30.348 LIB libspdk_scheduler_gscheduler.a 00:02:30.348 SYMLINK libspdk_accel_dsa.so 00:02:30.609 SO libspdk_vfu_device.so.3.0 00:02:30.609 SO libspdk_scheduler_gscheduler.so.4.0 00:02:30.609 SYMLINK libspdk_scheduler_gscheduler.so 00:02:30.609 SYMLINK libspdk_vfu_device.so 00:02:30.870 LIB libspdk_sock_posix.a 00:02:30.870 SO libspdk_sock_posix.so.6.0 00:02:31.130 CC module/bdev/delay/vbdev_delay.o 00:02:31.130 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:31.130 CC module/bdev/gpt/gpt.o 00:02:31.130 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.130 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.130 SYMLINK libspdk_sock_posix.so 00:02:31.130 CC module/bdev/error/vbdev_error.o 00:02:31.130 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.130 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.130 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:31.130 CC module/bdev/error/vbdev_error_rpc.o 00:02:31.130 CC module/bdev/aio/bdev_aio.o 00:02:31.130 CC module/bdev/raid/bdev_raid.o 00:02:31.130 CC module/bdev/ftl/bdev_ftl.o 00:02:31.130 CC module/bdev/malloc/bdev_malloc.o 00:02:31.130 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:31.130 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.130 CC module/bdev/raid/bdev_raid_rpc.o 00:02:31.130 CC module/bdev/raid/bdev_raid_sb.o 00:02:31.130 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.130 CC module/bdev/raid/raid0.o 00:02:31.130 CC module/bdev/split/vbdev_split.o 00:02:31.130 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:31.130 CC module/bdev/raid/raid1.o 00:02:31.130 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.130 CC module/bdev/null/bdev_null.o 00:02:31.130 CC module/bdev/split/vbdev_split_rpc.o 00:02:31.130 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:31.130 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.130 CC module/bdev/raid/concat.o 00:02:31.130 CC module/bdev/null/bdev_null_rpc.o 00:02:31.130 CC module/bdev/passthru/vbdev_passthru.o 00:02:31.130 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.130 CC module/bdev/nvme/bdev_nvme.o 00:02:31.130 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:31.130 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.130 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:31.130 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.130 CC module/bdev/nvme/nvme_rpc.o 00:02:31.130 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.130 CC module/bdev/nvme/vbdev_opal.o 00:02:31.130 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.130 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.389 LIB libspdk_blobfs_bdev.a 00:02:31.390 SO libspdk_blobfs_bdev.so.6.0 00:02:31.390 LIB libspdk_bdev_error.a 00:02:31.390 LIB libspdk_bdev_gpt.a 00:02:31.390 LIB libspdk_bdev_null.a 00:02:31.390 SO libspdk_bdev_error.so.6.0 00:02:31.390 SYMLINK libspdk_blobfs_bdev.so 00:02:31.390 SO libspdk_bdev_null.so.6.0 00:02:31.390 SO libspdk_bdev_gpt.so.6.0 00:02:31.390 LIB libspdk_bdev_aio.a 00:02:31.390 LIB libspdk_bdev_ftl.a 00:02:31.390 LIB libspdk_bdev_zone_block.a 00:02:31.390 SYMLINK libspdk_bdev_error.so 
00:02:31.390 SO libspdk_bdev_aio.so.6.0 00:02:31.390 SYMLINK libspdk_bdev_gpt.so 00:02:31.390 SYMLINK libspdk_bdev_null.so 00:02:31.390 LIB libspdk_bdev_delay.a 00:02:31.390 LIB libspdk_bdev_passthru.a 00:02:31.390 SO libspdk_bdev_ftl.so.6.0 00:02:31.390 LIB libspdk_bdev_malloc.a 00:02:31.390 SO libspdk_bdev_zone_block.so.6.0 00:02:31.390 LIB libspdk_bdev_iscsi.a 00:02:31.651 SO libspdk_bdev_delay.so.6.0 00:02:31.651 SO libspdk_bdev_passthru.so.6.0 00:02:31.651 SO libspdk_bdev_malloc.so.6.0 00:02:31.651 SYMLINK libspdk_bdev_aio.so 00:02:31.651 SYMLINK libspdk_bdev_ftl.so 00:02:31.651 SO libspdk_bdev_iscsi.so.6.0 00:02:31.651 LIB libspdk_bdev_lvol.a 00:02:31.651 SYMLINK libspdk_bdev_zone_block.so 00:02:31.651 SYMLINK libspdk_bdev_delay.so 00:02:31.651 SYMLINK libspdk_bdev_passthru.so 00:02:31.651 SO libspdk_bdev_lvol.so.6.0 00:02:31.651 SYMLINK libspdk_bdev_malloc.so 00:02:31.651 LIB libspdk_bdev_virtio.a 00:02:31.651 LIB libspdk_bdev_split.a 00:02:31.651 SYMLINK libspdk_bdev_iscsi.so 00:02:31.651 SO libspdk_bdev_virtio.so.6.0 00:02:31.651 SO libspdk_bdev_split.so.6.0 00:02:31.651 SYMLINK libspdk_bdev_lvol.so 00:02:31.651 SYMLINK libspdk_bdev_split.so 00:02:31.651 SYMLINK libspdk_bdev_virtio.so 00:02:31.911 LIB libspdk_bdev_raid.a 00:02:31.911 SO libspdk_bdev_raid.so.6.0 00:02:32.171 SYMLINK libspdk_bdev_raid.so 00:02:33.111 LIB libspdk_bdev_nvme.a 00:02:33.111 SO libspdk_bdev_nvme.so.7.0 00:02:33.111 SYMLINK libspdk_bdev_nvme.so 00:02:33.682 CC module/event/subsystems/iobuf/iobuf.o 00:02:33.682 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:33.682 CC module/event/subsystems/scheduler/scheduler.o 00:02:33.682 CC module/event/subsystems/sock/sock.o 00:02:33.943 CC module/event/subsystems/keyring/keyring.o 00:02:33.943 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:33.943 CC module/event/subsystems/vmd/vmd.o 00:02:33.943 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:33.943 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:33.943 LIB libspdk_event_sock.a 00:02:33.943 LIB libspdk_event_keyring.a 00:02:33.943 LIB libspdk_event_scheduler.a 00:02:33.943 LIB libspdk_event_iobuf.a 00:02:33.943 LIB libspdk_event_vfu_tgt.a 00:02:33.943 LIB libspdk_event_vmd.a 00:02:33.943 LIB libspdk_event_vhost_blk.a 00:02:33.943 SO libspdk_event_sock.so.5.0 00:02:33.943 SO libspdk_event_keyring.so.1.0 00:02:33.943 SO libspdk_event_scheduler.so.4.0 00:02:33.943 SO libspdk_event_iobuf.so.3.0 00:02:33.943 SO libspdk_event_vhost_blk.so.3.0 00:02:33.943 SO libspdk_event_vfu_tgt.so.3.0 00:02:33.943 SO libspdk_event_vmd.so.6.0 00:02:34.203 SYMLINK libspdk_event_keyring.so 00:02:34.203 SYMLINK libspdk_event_sock.so 00:02:34.203 SYMLINK libspdk_event_vhost_blk.so 00:02:34.203 SYMLINK libspdk_event_iobuf.so 00:02:34.203 SYMLINK libspdk_event_scheduler.so 00:02:34.203 SYMLINK libspdk_event_vfu_tgt.so 00:02:34.203 SYMLINK libspdk_event_vmd.so 00:02:34.463 CC module/event/subsystems/accel/accel.o 00:02:34.723 LIB libspdk_event_accel.a 00:02:34.723 SO libspdk_event_accel.so.6.0 00:02:34.723 SYMLINK libspdk_event_accel.so 00:02:34.984 CC module/event/subsystems/bdev/bdev.o 00:02:35.244 LIB libspdk_event_bdev.a 00:02:35.244 SO libspdk_event_bdev.so.6.0 00:02:35.244 SYMLINK libspdk_event_bdev.so 00:02:35.815 CC module/event/subsystems/scsi/scsi.o 00:02:35.815 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:35.815 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:35.815 CC module/event/subsystems/nbd/nbd.o 00:02:35.815 CC module/event/subsystems/ublk/ublk.o 00:02:35.815 LIB libspdk_event_nbd.a 00:02:35.815 LIB 
libspdk_event_ublk.a 00:02:35.815 LIB libspdk_event_scsi.a 00:02:35.815 SO libspdk_event_nbd.so.6.0 00:02:35.815 SO libspdk_event_ublk.so.3.0 00:02:35.815 SO libspdk_event_scsi.so.6.0 00:02:35.815 LIB libspdk_event_nvmf.a 00:02:36.077 SYMLINK libspdk_event_ublk.so 00:02:36.077 SYMLINK libspdk_event_nbd.so 00:02:36.077 SYMLINK libspdk_event_scsi.so 00:02:36.077 SO libspdk_event_nvmf.so.6.0 00:02:36.077 SYMLINK libspdk_event_nvmf.so 00:02:36.337 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.337 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.598 LIB libspdk_event_vhost_scsi.a 00:02:36.598 LIB libspdk_event_iscsi.a 00:02:36.598 SO libspdk_event_vhost_scsi.so.3.0 00:02:36.598 SO libspdk_event_iscsi.so.6.0 00:02:36.598 SYMLINK libspdk_event_vhost_scsi.so 00:02:36.598 SYMLINK libspdk_event_iscsi.so 00:02:36.858 SO libspdk.so.6.0 00:02:36.858 SYMLINK libspdk.so 00:02:37.430 CXX app/trace/trace.o 00:02:37.430 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.430 CC app/trace_record/trace_record.o 00:02:37.430 CC test/rpc_client/rpc_client_test.o 00:02:37.430 CC app/spdk_nvme_perf/perf.o 00:02:37.430 CC app/spdk_lspci/spdk_lspci.o 00:02:37.430 TEST_HEADER include/spdk/accel.h 00:02:37.430 TEST_HEADER include/spdk/assert.h 00:02:37.430 TEST_HEADER include/spdk/accel_module.h 00:02:37.430 CC app/spdk_nvme_identify/identify.o 00:02:37.430 CC app/spdk_top/spdk_top.o 00:02:37.430 TEST_HEADER include/spdk/barrier.h 00:02:37.430 TEST_HEADER include/spdk/base64.h 00:02:37.430 TEST_HEADER include/spdk/bdev.h 00:02:37.430 TEST_HEADER include/spdk/bdev_module.h 00:02:37.430 TEST_HEADER include/spdk/bit_array.h 00:02:37.430 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.430 TEST_HEADER include/spdk/bit_pool.h 00:02:37.430 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.430 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.430 TEST_HEADER include/spdk/blobfs.h 00:02:37.430 TEST_HEADER include/spdk/blob.h 00:02:37.430 TEST_HEADER include/spdk/conf.h 00:02:37.430 TEST_HEADER include/spdk/config.h 00:02:37.430 TEST_HEADER include/spdk/cpuset.h 00:02:37.430 TEST_HEADER include/spdk/crc16.h 00:02:37.430 TEST_HEADER include/spdk/crc32.h 00:02:37.430 TEST_HEADER include/spdk/crc64.h 00:02:37.430 TEST_HEADER include/spdk/dif.h 00:02:37.430 TEST_HEADER include/spdk/dma.h 00:02:37.430 TEST_HEADER include/spdk/endian.h 00:02:37.430 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.430 TEST_HEADER include/spdk/env.h 00:02:37.430 TEST_HEADER include/spdk/event.h 00:02:37.430 TEST_HEADER include/spdk/fd_group.h 00:02:37.430 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.430 TEST_HEADER include/spdk/fd.h 00:02:37.430 TEST_HEADER include/spdk/file.h 00:02:37.430 TEST_HEADER include/spdk/ftl.h 00:02:37.430 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.430 TEST_HEADER include/spdk/histogram_data.h 00:02:37.430 TEST_HEADER include/spdk/hexlify.h 00:02:37.430 TEST_HEADER include/spdk/idxd.h 00:02:37.430 TEST_HEADER include/spdk/idxd_spec.h 00:02:37.430 TEST_HEADER include/spdk/init.h 00:02:37.430 TEST_HEADER include/spdk/ioat.h 00:02:37.430 CC app/spdk_dd/spdk_dd.o 00:02:37.430 TEST_HEADER include/spdk/ioat_spec.h 00:02:37.430 CC app/nvmf_tgt/nvmf_main.o 00:02:37.430 TEST_HEADER include/spdk/json.h 00:02:37.430 TEST_HEADER include/spdk/iscsi_spec.h 00:02:37.430 TEST_HEADER include/spdk/jsonrpc.h 00:02:37.430 TEST_HEADER include/spdk/keyring.h 00:02:37.431 TEST_HEADER include/spdk/keyring_module.h 00:02:37.431 TEST_HEADER include/spdk/likely.h 00:02:37.431 TEST_HEADER include/spdk/log.h 00:02:37.431 TEST_HEADER 
include/spdk/memory.h 00:02:37.431 TEST_HEADER include/spdk/lvol.h 00:02:37.431 TEST_HEADER include/spdk/mmio.h 00:02:37.431 TEST_HEADER include/spdk/nbd.h 00:02:37.431 TEST_HEADER include/spdk/net.h 00:02:37.431 TEST_HEADER include/spdk/notify.h 00:02:37.431 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.431 TEST_HEADER include/spdk/nvme.h 00:02:37.431 TEST_HEADER include/spdk/nvme_intel.h 00:02:37.431 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:37.431 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:37.431 TEST_HEADER include/spdk/nvme_zns.h 00:02:37.431 TEST_HEADER include/spdk/nvme_spec.h 00:02:37.431 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:37.431 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:37.431 TEST_HEADER include/spdk/nvmf.h 00:02:37.431 TEST_HEADER include/spdk/nvmf_spec.h 00:02:37.431 TEST_HEADER include/spdk/nvmf_transport.h 00:02:37.431 TEST_HEADER include/spdk/opal.h 00:02:37.431 TEST_HEADER include/spdk/opal_spec.h 00:02:37.431 TEST_HEADER include/spdk/pci_ids.h 00:02:37.431 TEST_HEADER include/spdk/pipe.h 00:02:37.431 TEST_HEADER include/spdk/queue.h 00:02:37.431 TEST_HEADER include/spdk/reduce.h 00:02:37.431 TEST_HEADER include/spdk/rpc.h 00:02:37.431 TEST_HEADER include/spdk/scsi.h 00:02:37.431 TEST_HEADER include/spdk/scheduler.h 00:02:37.431 TEST_HEADER include/spdk/scsi_spec.h 00:02:37.431 CC app/spdk_tgt/spdk_tgt.o 00:02:37.431 TEST_HEADER include/spdk/sock.h 00:02:37.431 TEST_HEADER include/spdk/stdinc.h 00:02:37.431 TEST_HEADER include/spdk/string.h 00:02:37.431 TEST_HEADER include/spdk/thread.h 00:02:37.431 TEST_HEADER include/spdk/trace_parser.h 00:02:37.431 TEST_HEADER include/spdk/trace.h 00:02:37.431 TEST_HEADER include/spdk/tree.h 00:02:37.431 TEST_HEADER include/spdk/uuid.h 00:02:37.431 TEST_HEADER include/spdk/ublk.h 00:02:37.431 TEST_HEADER include/spdk/util.h 00:02:37.431 TEST_HEADER include/spdk/version.h 00:02:37.431 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:37.431 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:37.431 TEST_HEADER include/spdk/vhost.h 00:02:37.431 TEST_HEADER include/spdk/vmd.h 00:02:37.431 TEST_HEADER include/spdk/xor.h 00:02:37.431 TEST_HEADER include/spdk/zipf.h 00:02:37.431 CXX test/cpp_headers/accel.o 00:02:37.431 CXX test/cpp_headers/accel_module.o 00:02:37.431 CXX test/cpp_headers/assert.o 00:02:37.431 CXX test/cpp_headers/barrier.o 00:02:37.431 CXX test/cpp_headers/base64.o 00:02:37.431 CXX test/cpp_headers/bdev.o 00:02:37.431 CXX test/cpp_headers/bdev_module.o 00:02:37.431 CXX test/cpp_headers/bdev_zone.o 00:02:37.431 CXX test/cpp_headers/bit_array.o 00:02:37.431 CXX test/cpp_headers/blob_bdev.o 00:02:37.431 CXX test/cpp_headers/bit_pool.o 00:02:37.431 CXX test/cpp_headers/blobfs_bdev.o 00:02:37.431 CXX test/cpp_headers/blobfs.o 00:02:37.431 CXX test/cpp_headers/conf.o 00:02:37.431 CXX test/cpp_headers/config.o 00:02:37.431 CXX test/cpp_headers/blob.o 00:02:37.431 CXX test/cpp_headers/cpuset.o 00:02:37.431 CXX test/cpp_headers/crc32.o 00:02:37.431 CXX test/cpp_headers/crc16.o 00:02:37.431 CXX test/cpp_headers/dif.o 00:02:37.431 CXX test/cpp_headers/crc64.o 00:02:37.431 CXX test/cpp_headers/endian.o 00:02:37.431 CXX test/cpp_headers/dma.o 00:02:37.431 CXX test/cpp_headers/env_dpdk.o 00:02:37.431 CXX test/cpp_headers/env.o 00:02:37.431 CXX test/cpp_headers/event.o 00:02:37.431 CXX test/cpp_headers/fd_group.o 00:02:37.431 CXX test/cpp_headers/fd.o 00:02:37.431 CXX test/cpp_headers/ftl.o 00:02:37.431 CXX test/cpp_headers/file.o 00:02:37.431 CXX test/cpp_headers/gpt_spec.o 00:02:37.431 CXX test/cpp_headers/hexlify.o 00:02:37.431 CXX 
test/cpp_headers/histogram_data.o 00:02:37.431 CXX test/cpp_headers/idxd.o 00:02:37.431 CXX test/cpp_headers/idxd_spec.o 00:02:37.431 CXX test/cpp_headers/ioat.o 00:02:37.431 CXX test/cpp_headers/ioat_spec.o 00:02:37.431 CXX test/cpp_headers/init.o 00:02:37.431 CXX test/cpp_headers/jsonrpc.o 00:02:37.431 CXX test/cpp_headers/iscsi_spec.o 00:02:37.431 CXX test/cpp_headers/keyring.o 00:02:37.431 CC test/thread/poller_perf/poller_perf.o 00:02:37.431 CXX test/cpp_headers/keyring_module.o 00:02:37.431 CXX test/cpp_headers/json.o 00:02:37.431 CXX test/cpp_headers/likely.o 00:02:37.431 CC test/app/jsoncat/jsoncat.o 00:02:37.431 CXX test/cpp_headers/log.o 00:02:37.431 CXX test/cpp_headers/lvol.o 00:02:37.431 CXX test/cpp_headers/net.o 00:02:37.431 CXX test/cpp_headers/memory.o 00:02:37.431 CXX test/cpp_headers/nbd.o 00:02:37.431 CXX test/cpp_headers/notify.o 00:02:37.431 CXX test/cpp_headers/nvme.o 00:02:37.431 CXX test/cpp_headers/mmio.o 00:02:37.431 CXX test/cpp_headers/nvme_intel.o 00:02:37.431 CXX test/cpp_headers/nvme_ocssd.o 00:02:37.431 CXX test/cpp_headers/nvme_spec.o 00:02:37.431 CXX test/cpp_headers/nvmf_cmd.o 00:02:37.431 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:37.431 CC examples/util/zipf/zipf.o 00:02:37.431 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:37.431 CXX test/cpp_headers/nvme_zns.o 00:02:37.431 CXX test/cpp_headers/opal_spec.o 00:02:37.431 CXX test/cpp_headers/nvmf_spec.o 00:02:37.431 CXX test/cpp_headers/nvmf_transport.o 00:02:37.431 CC test/app/histogram_perf/histogram_perf.o 00:02:37.431 CXX test/cpp_headers/opal.o 00:02:37.431 CXX test/cpp_headers/nvmf.o 00:02:37.431 CXX test/cpp_headers/pipe.o 00:02:37.431 CC test/app/stub/stub.o 00:02:37.431 CXX test/cpp_headers/pci_ids.o 00:02:37.431 CXX test/cpp_headers/rpc.o 00:02:37.431 CXX test/cpp_headers/queue.o 00:02:37.431 CC examples/ioat/verify/verify.o 00:02:37.695 CXX test/cpp_headers/reduce.o 00:02:37.695 CXX test/cpp_headers/scheduler.o 00:02:37.695 CC test/env/pci/pci_ut.o 00:02:37.695 CXX test/cpp_headers/scsi_spec.o 00:02:37.695 CXX test/cpp_headers/string.o 00:02:37.695 CXX test/cpp_headers/scsi.o 00:02:37.695 CXX test/cpp_headers/stdinc.o 00:02:37.695 CXX test/cpp_headers/sock.o 00:02:37.695 CXX test/cpp_headers/thread.o 00:02:37.695 CXX test/cpp_headers/trace.o 00:02:37.695 CC app/fio/nvme/fio_plugin.o 00:02:37.695 CXX test/cpp_headers/trace_parser.o 00:02:37.695 CC test/env/vtophys/vtophys.o 00:02:37.695 CXX test/cpp_headers/uuid.o 00:02:37.695 CXX test/cpp_headers/tree.o 00:02:37.695 CC test/env/memory/memory_ut.o 00:02:37.695 CXX test/cpp_headers/util.o 00:02:37.695 CXX test/cpp_headers/ublk.o 00:02:37.695 CXX test/cpp_headers/version.o 00:02:37.695 CXX test/cpp_headers/vfio_user_pci.o 00:02:37.695 LINK spdk_nvme_discover 00:02:37.695 CXX test/cpp_headers/vfio_user_spec.o 00:02:37.695 CXX test/cpp_headers/vhost.o 00:02:37.695 CXX test/cpp_headers/vmd.o 00:02:37.695 CXX test/cpp_headers/zipf.o 00:02:37.695 CXX test/cpp_headers/xor.o 00:02:37.695 CC examples/ioat/perf/perf.o 00:02:37.695 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:37.695 CC test/dma/test_dma/test_dma.o 00:02:37.695 LINK rpc_client_test 00:02:37.695 CC test/app/bdev_svc/bdev_svc.o 00:02:37.695 LINK interrupt_tgt 00:02:37.695 CC app/fio/bdev/fio_plugin.o 00:02:37.963 LINK spdk_trace_record 00:02:37.963 LINK nvmf_tgt 00:02:38.225 CC test/env/mem_callbacks/mem_callbacks.o 00:02:38.225 LINK spdk_lspci 00:02:38.225 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:38.225 LINK iscsi_tgt 00:02:38.225 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:02:38.225 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:38.225 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:38.225 LINK spdk_tgt 00:02:38.225 LINK zipf 00:02:38.225 LINK poller_perf 00:02:38.484 LINK jsoncat 00:02:38.484 LINK histogram_perf 00:02:38.484 LINK spdk_dd 00:02:38.484 LINK bdev_svc 00:02:38.484 LINK ioat_perf 00:02:38.484 LINK vtophys 00:02:38.484 LINK stub 00:02:38.484 LINK env_dpdk_post_init 00:02:38.484 LINK verify 00:02:38.484 LINK spdk_trace 00:02:38.744 LINK test_dma 00:02:38.744 LINK spdk_bdev 00:02:39.004 LINK spdk_nvme 00:02:39.004 LINK vhost_fuzz 00:02:39.004 CC examples/vmd/led/led.o 00:02:39.004 CC examples/vmd/lsvmd/lsvmd.o 00:02:39.004 CC test/event/reactor/reactor.o 00:02:39.004 CC examples/idxd/perf/perf.o 00:02:39.004 CC test/event/reactor_perf/reactor_perf.o 00:02:39.004 CC examples/sock/hello_world/hello_sock.o 00:02:39.004 CC test/event/event_perf/event_perf.o 00:02:39.004 LINK nvme_fuzz 00:02:39.004 LINK pci_ut 00:02:39.004 CC examples/thread/thread/thread_ex.o 00:02:39.004 CC test/event/app_repeat/app_repeat.o 00:02:39.004 CC test/event/scheduler/scheduler.o 00:02:39.004 LINK spdk_nvme_perf 00:02:39.004 CC app/vhost/vhost.o 00:02:39.004 LINK spdk_nvme_identify 00:02:39.293 LINK reactor 00:02:39.293 LINK lsvmd 00:02:39.293 LINK led 00:02:39.293 LINK reactor_perf 00:02:39.293 LINK spdk_top 00:02:39.293 LINK mem_callbacks 00:02:39.293 LINK event_perf 00:02:39.293 LINK app_repeat 00:02:39.293 LINK scheduler 00:02:39.293 LINK hello_sock 00:02:39.293 LINK thread 00:02:39.293 LINK idxd_perf 00:02:39.293 LINK vhost 00:02:39.293 CC test/nvme/overhead/overhead.o 00:02:39.293 CC test/nvme/e2edp/nvme_dp.o 00:02:39.293 CC test/nvme/sgl/sgl.o 00:02:39.293 CC test/nvme/connect_stress/connect_stress.o 00:02:39.293 CC test/nvme/aer/aer.o 00:02:39.293 CC test/nvme/reserve/reserve.o 00:02:39.293 CC test/nvme/simple_copy/simple_copy.o 00:02:39.293 CC test/nvme/compliance/nvme_compliance.o 00:02:39.293 CC test/nvme/err_injection/err_injection.o 00:02:39.293 CC test/nvme/startup/startup.o 00:02:39.293 CC test/nvme/fdp/fdp.o 00:02:39.293 CC test/nvme/boot_partition/boot_partition.o 00:02:39.293 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:39.293 CC test/accel/dif/dif.o 00:02:39.293 CC test/nvme/reset/reset.o 00:02:39.293 CC test/nvme/fused_ordering/fused_ordering.o 00:02:39.293 CC test/nvme/cuse/cuse.o 00:02:39.293 CC test/blobfs/mkfs/mkfs.o 00:02:39.579 LINK memory_ut 00:02:39.579 CC test/lvol/esnap/esnap.o 00:02:39.579 LINK startup 00:02:39.579 LINK connect_stress 00:02:39.579 LINK err_injection 00:02:39.579 LINK doorbell_aers 00:02:39.579 LINK fused_ordering 00:02:39.579 LINK reserve 00:02:39.579 LINK overhead 00:02:39.579 LINK nvme_dp 00:02:39.579 LINK simple_copy 00:02:39.579 LINK sgl 00:02:39.579 LINK mkfs 00:02:39.579 LINK aer 00:02:39.579 LINK reset 00:02:39.838 LINK nvme_compliance 00:02:39.838 LINK fdp 00:02:39.838 LINK boot_partition 00:02:39.838 CC examples/nvme/hello_world/hello_world.o 00:02:39.838 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:39.838 CC examples/nvme/hotplug/hotplug.o 00:02:39.838 CC examples/nvme/reconnect/reconnect.o 00:02:39.838 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:39.838 CC examples/nvme/arbitration/arbitration.o 00:02:39.838 CC examples/nvme/abort/abort.o 00:02:39.838 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:39.838 CC examples/blob/hello_world/hello_blob.o 00:02:39.838 CC examples/accel/perf/accel_perf.o 00:02:39.838 CC examples/blob/cli/blobcli.o 00:02:40.107 LINK iscsi_fuzz 00:02:40.107 LINK cmb_copy 
00:02:40.107 LINK pmr_persistence 00:02:40.107 LINK hello_world 00:02:40.107 LINK hotplug 00:02:40.107 LINK arbitration 00:02:40.107 LINK abort 00:02:40.107 LINK reconnect 00:02:40.368 LINK hello_blob 00:02:40.368 LINK nvme_manage 00:02:40.368 LINK accel_perf 00:02:40.368 LINK dif 00:02:40.368 LINK blobcli 00:02:40.629 LINK cuse 00:02:40.907 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.907 CC examples/bdev/bdevperf/bdevperf.o 00:02:40.907 CC test/bdev/bdevio/bdevio.o 00:02:41.166 LINK hello_bdev 00:02:41.426 LINK bdevio 00:02:41.686 LINK bdevperf 00:02:42.256 CC examples/nvmf/nvmf/nvmf.o 00:02:42.516 LINK nvmf 00:02:43.457 LINK esnap 00:02:44.030 00:02:44.030 real 0m54.201s 00:02:44.030 user 7m1.156s 00:02:44.030 sys 5m13.589s 00:02:44.030 12:15:17 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:44.030 12:15:17 make -- common/autotest_common.sh@10 -- $ set +x 00:02:44.030 ************************************ 00:02:44.030 END TEST make 00:02:44.030 ************************************ 00:02:44.030 12:15:17 -- common/autotest_common.sh@1142 -- $ return 0 00:02:44.030 12:15:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:44.030 12:15:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:44.030 12:15:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:44.030 12:15:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.030 12:15:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:44.030 12:15:17 -- pm/common@44 -- $ pid=93916 00:02:44.030 12:15:17 -- pm/common@50 -- $ kill -TERM 93916 00:02:44.030 12:15:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.030 12:15:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:44.030 12:15:17 -- pm/common@44 -- $ pid=93917 00:02:44.030 12:15:17 -- pm/common@50 -- $ kill -TERM 93917 00:02:44.030 12:15:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.030 12:15:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:44.030 12:15:17 -- pm/common@44 -- $ pid=93919 00:02:44.030 12:15:17 -- pm/common@50 -- $ kill -TERM 93919 00:02:44.030 12:15:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.030 12:15:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:44.030 12:15:17 -- pm/common@44 -- $ pid=93941 00:02:44.030 12:15:17 -- pm/common@50 -- $ sudo -E kill -TERM 93941 00:02:44.030 12:15:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:44.030 12:15:17 -- nvmf/common.sh@7 -- # uname -s 00:02:44.030 12:15:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:44.030 12:15:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:44.030 12:15:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:44.030 12:15:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:44.030 12:15:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:44.030 12:15:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:44.030 12:15:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:44.030 12:15:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:44.030 12:15:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:44.030 12:15:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:02:44.030 12:15:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:02:44.030 12:15:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:02:44.030 12:15:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:44.030 12:15:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:44.030 12:15:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:44.030 12:15:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:44.030 12:15:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:44.030 12:15:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:44.030 12:15:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:44.030 12:15:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:44.030 12:15:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.030 12:15:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.030 12:15:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.030 12:15:17 -- paths/export.sh@5 -- # export PATH 00:02:44.291 12:15:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.291 12:15:17 -- nvmf/common.sh@47 -- # : 0 00:02:44.291 12:15:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:44.291 12:15:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:44.291 12:15:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:44.291 12:15:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:44.291 12:15:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:44.291 12:15:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:44.291 12:15:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:44.291 12:15:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:44.291 12:15:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:44.291 12:15:17 -- spdk/autotest.sh@32 -- # uname -s 00:02:44.291 12:15:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:44.291 12:15:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:44.291 12:15:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:44.291 12:15:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:44.291 12:15:17 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:44.291 12:15:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:44.291 12:15:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:44.291 12:15:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:44.291 12:15:17 -- spdk/autotest.sh@48 -- # udevadm_pid=155583 00:02:44.291 12:15:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:44.291 12:15:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:44.291 12:15:17 -- pm/common@17 -- # local monitor 00:02:44.291 12:15:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.291 12:15:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.291 12:15:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.291 12:15:17 -- pm/common@21 -- # date +%s 00:02:44.291 12:15:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.291 12:15:17 -- pm/common@25 -- # sleep 1 00:02:44.291 12:15:17 -- pm/common@21 -- # date +%s 00:02:44.291 12:15:17 -- pm/common@21 -- # date +%s 00:02:44.291 12:15:17 -- pm/common@21 -- # date +%s 00:02:44.291 12:15:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721902517 00:02:44.291 12:15:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721902517 00:02:44.291 12:15:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721902517 00:02:44.291 12:15:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721902517 00:02:44.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721902517_collect-vmstat.pm.log 00:02:44.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721902517_collect-cpu-load.pm.log 00:02:44.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721902517_collect-cpu-temp.pm.log 00:02:44.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721902517_collect-bmc-pm.bmc.pm.log 00:02:45.230 12:15:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:45.230 12:15:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:45.230 12:15:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:45.230 12:15:18 -- common/autotest_common.sh@10 -- # set +x 00:02:45.230 12:15:18 -- spdk/autotest.sh@59 -- # create_test_list 00:02:45.230 12:15:18 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:45.230 12:15:18 -- common/autotest_common.sh@10 -- # set +x 00:02:45.230 12:15:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:45.230 12:15:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.230 12:15:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
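Note on the four collect-* monitors launched just above: each one is started with -d <output>/power plus a -p name prefix, redirects its own output to a *.pm.log under that directory, and (as the signal_monitor_resources pass seen earlier in this log suggests) leaves a pid file behind so a later stop pass can TERM it. A minimal sketch of that stop pattern follows; the helper name is hypothetical, the directory and monitor names are the ones visible in the trace, and reading the pid back from the file is an assumption inferred from the [[ -e *.pid ]] checks:

stop_monitor_resources() {
    # $output_dir stands in for .../nvmf-tcp-phy-autotest/spdk/../output from the trace
    local power_dir=$output_dir/power monitor pid
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        [[ -e $power_dir/$monitor.pid ]] || continue
        pid=$(<"$power_dir/$monitor.pid")      # assumed: pid recorded by the collector at startup
        kill -TERM "$pid" 2>/dev/null || true  # collect-bmc-pm is TERMed via sudo -E in the trace
    done
}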
00:02:45.230 12:15:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:45.230 12:15:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.230 12:15:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:45.230 12:15:18 -- common/autotest_common.sh@1455 -- # uname 00:02:45.230 12:15:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:45.230 12:15:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:45.230 12:15:18 -- common/autotest_common.sh@1475 -- # uname 00:02:45.230 12:15:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:45.230 12:15:18 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:45.230 12:15:18 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:45.230 12:15:18 -- spdk/autotest.sh@72 -- # hash lcov 00:02:45.230 12:15:18 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:45.230 12:15:18 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:45.231 --rc lcov_branch_coverage=1 00:02:45.231 --rc lcov_function_coverage=1 00:02:45.231 --rc genhtml_branch_coverage=1 00:02:45.231 --rc genhtml_function_coverage=1 00:02:45.231 --rc genhtml_legend=1 00:02:45.231 --rc geninfo_all_blocks=1 00:02:45.231 ' 00:02:45.231 12:15:18 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:45.231 --rc lcov_branch_coverage=1 00:02:45.231 --rc lcov_function_coverage=1 00:02:45.231 --rc genhtml_branch_coverage=1 00:02:45.231 --rc genhtml_function_coverage=1 00:02:45.231 --rc genhtml_legend=1 00:02:45.231 --rc geninfo_all_blocks=1 00:02:45.231 ' 00:02:45.231 12:15:18 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:45.231 --rc lcov_branch_coverage=1 00:02:45.231 --rc lcov_function_coverage=1 00:02:45.231 --rc genhtml_branch_coverage=1 00:02:45.231 --rc genhtml_function_coverage=1 00:02:45.231 --rc genhtml_legend=1 00:02:45.231 --rc geninfo_all_blocks=1 00:02:45.231 --no-external' 00:02:45.231 12:15:18 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:45.231 --rc lcov_branch_coverage=1 00:02:45.231 --rc lcov_function_coverage=1 00:02:45.231 --rc genhtml_branch_coverage=1 00:02:45.231 --rc genhtml_function_coverage=1 00:02:45.231 --rc genhtml_legend=1 00:02:45.231 --rc geninfo_all_blocks=1 00:02:45.231 --no-external' 00:02:45.231 12:15:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:45.231 lcov: LCOV version 1.14 00:02:45.490 12:15:18 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:57.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:57.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:09.940 
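For readability, the baseline coverage capture started above amounts to the invocation below (same flags, paths shortened to $SPDK and $OUTPUT); the initial capture with -c -i records zero execution counts so a later post-test capture can be diffed against it. The "no functions found" warnings that follow are emitted per .gcno object and are generally benign for the header-only compile units under test/cpp_headers:

lcov --rc lcov_branch_coverage=1 \
     --rc lcov_function_coverage=1 \
     --rc genhtml_branch_coverage=1 \
     --rc genhtml_function_coverage=1 \
     --rc genhtml_legend=1 \
     --rc geninfo_all_blocks=1 \
     --no-external -q -c -i \
     -t Baseline \
     -d "$SPDK" \
     -o "$OUTPUT/cov_base.info"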
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:09.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:09.941 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 
00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:09.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:09.942 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:09.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:09.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:13.236 12:15:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:13.236 12:15:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:13.236 12:15:46 -- common/autotest_common.sh@10 -- # set +x 00:03:13.236 12:15:46 -- spdk/autotest.sh@91 -- # rm -f 00:03:13.236 12:15:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.436 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:03:17.436 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:17.436 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:17.437 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:17.437 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:17.437 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:17.437 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:17.697 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:17.697 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:17.697 12:15:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:17.697 12:15:50 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:17.697 12:15:50 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:17.697 12:15:50 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:17.697 12:15:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:17.697 12:15:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:17.697 12:15:50 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:17.697 12:15:50 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.697 12:15:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:17.697 12:15:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:17.697 12:15:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:17.697 12:15:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:17.697 12:15:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:17.697 12:15:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:17.697 12:15:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:17.697 No valid GPT data, bailing 00:03:17.697 12:15:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.697 12:15:50 -- scripts/common.sh@391 -- # pt= 00:03:17.697 12:15:50 -- scripts/common.sh@392 -- # return 1 00:03:17.697 12:15:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:17.697 1+0 records in 00:03:17.697 1+0 records out 00:03:17.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514715 s, 204 MB/s 00:03:17.697 12:15:50 -- spdk/autotest.sh@118 -- # sync 00:03:17.697 12:15:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:17.697 12:15:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:17.697 12:15:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:25.888 12:15:58 -- spdk/autotest.sh@124 -- # uname -s 00:03:25.888 12:15:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:25.888 12:15:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.888 12:15:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.888 12:15:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.888 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:03:25.888 ************************************ 00:03:25.888 START TEST setup.sh 00:03:25.888 ************************************ 00:03:25.888 12:15:58 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.888 * Looking for test storage... 00:03:25.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.888 12:15:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:25.888 12:15:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:25.888 12:15:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:25.888 12:15:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.888 12:15:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.888 12:15:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.888 ************************************ 00:03:25.888 START TEST acl 00:03:25.888 ************************************ 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:25.888 * Looking for test storage... 
00:03:25.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.888 12:15:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.888 12:15:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.888 12:15:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:25.888 12:15:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:25.888 12:15:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:25.888 12:15:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:25.888 12:15:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:25.888 12:15:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.888 12:15:58 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.088 12:16:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:30.088 12:16:03 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:30.088 12:16:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.088 12:16:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:30.088 12:16:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.088 12:16:03 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:34.290 Hugepages 00:03:34.290 node hugesize free / total 00:03:34.290 12:16:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:34.290 12:16:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:34.290 12:16:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 00:03:34.290 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:34.290 12:16:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:34.290 12:16:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.290 12:16:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.290 12:16:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:34.290 ************************************ 00:03:34.290 START TEST denied 00:03:34.290 ************************************ 00:03:34.290 12:16:07 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:34.290 12:16:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:34.290 12:16:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:34.290 12:16:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:34.290 12:16:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.290 12:16:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.495 0000:65:00.0 (8086 0a54): Skipping denied controller at 0000:65:00.0 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:38.495 12:16:11 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.495 12:16:11 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.783 00:03:43.783 real 0m9.406s 00:03:43.783 user 0m3.070s 00:03:43.783 sys 0m5.598s 00:03:43.783 12:16:16 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.783 12:16:16 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:43.783 ************************************ 00:03:43.783 END TEST denied 00:03:43.783 ************************************ 00:03:43.783 12:16:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:43.783 12:16:16 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:43.783 12:16:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.783 12:16:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.783 12:16:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.783 ************************************ 00:03:43.783 START TEST allowed 00:03:43.783 ************************************ 00:03:43.783 12:16:16 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:43.783 12:16:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:43.783 12:16:16 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:43.783 12:16:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:43.783 12:16:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.783 12:16:16 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.369 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.369 12:16:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:50.369 12:16:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:50.369 12:16:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:50.369 12:16:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.369 12:16:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.670 00:03:53.670 real 0m10.268s 00:03:53.670 user 0m2.967s 00:03:53.670 sys 0m5.495s 00:03:53.670 12:16:27 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.670 12:16:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:53.670 ************************************ 00:03:53.670 END TEST allowed 00:03:53.670 ************************************ 00:03:53.670 12:16:27 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:53.670 00:03:53.670 real 0m28.442s 00:03:53.670 user 0m9.241s 00:03:53.670 sys 0m16.899s 00:03:53.670 12:16:27 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.670 12:16:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:53.670 ************************************ 00:03:53.670 END TEST acl 00:03:53.670 ************************************ 00:03:53.932 12:16:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:53.932 12:16:27 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:53.932 12:16:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.932 12:16:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.932 12:16:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.932 ************************************ 00:03:53.932 START TEST hugepages 00:03:53.932 ************************************ 00:03:53.932 12:16:27 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:53.932 * Looking for test storage... 00:03:53.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 108421608 kB' 'MemAvailable: 111640544 kB' 'Buffers: 3736 kB' 'Cached: 9463644 kB' 'SwapCached: 0 kB' 'Active: 6510416 kB' 'Inactive: 3511840 kB' 'Active(anon): 6110652 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557896 kB' 'Mapped: 228968 kB' 'Shmem: 5555776 kB' 'KReclaimable: 263352 kB' 'Slab: 932544 kB' 'SReclaimable: 263352 kB' 'SUnreclaim: 669192 kB' 'KernelStack: 25104 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69463468 kB' 'Committed_AS: 7646812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229788 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.932 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' [... identical read/compare/continue xtrace lines repeat here for every remaining /proc/meminfo field (Active(file) through HugePages_Free); none of them matches Hugepagesize ...]
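The loop traced above is the get_meminfo helper in setup/common.sh walking /proc/meminfo one "Key: value" pair at a time: it splits each line on ': ', skips every key that is not the one requested, and echoes the value once it reaches Hugepagesize (2048 on this node, as the trace resumes below). A minimal standalone sketch of that lookup pattern follows; meminfo_value is an illustrative name, not the script's own function, and a single awk call would do the same job.

    # Sketch only: look up one field from /proc/meminfo the same way the traced
    # loop does -- split on ": ", skip non-matching keys, stop at the first hit.
    # meminfo_value is an illustrative name, not part of setup/common.sh.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"        # any trailing "kB" unit lands in the discarded third field
            return 0
        done < /proc/meminfo
        return 1
    }

    meminfo_value Hugepagesize   # prints 2048 on this node, per the trace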
setup/common.sh@31 -- # read -r var val _ 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.934 
12:16:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:53.934 12:16:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:53.934 12:16:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.934 12:16:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.934 12:16:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.194 ************************************ 00:03:54.194 START TEST default_setup 00:03:54.194 ************************************ 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.194 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.195 12:16:27 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.398 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 
0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:58.398 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:00.327 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.327 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110646616 kB' 'MemAvailable: 113865540 kB' 'Buffers: 3736 kB' 'Cached: 9463804 kB' 'SwapCached: 0 kB' 'Active: 6527368 kB' 'Inactive: 3511840 kB' 'Active(anon): 6127604 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575060 kB' 'Mapped: 229036 kB' 'Shmem: 5555936 kB' 'KReclaimable: 263328 kB' 'Slab: 930512 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667184 kB' 
'KernelStack: 24944 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7667720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.328 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
[... identical read/compare/continue xtrace lines repeat here for the remaining /proc/meminfo fields (Inactive through WritebackTmp); none of them matches AnonHugePages ...]
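For reference, the get_test_nr_hugepages 2097152 0 call traced before the START TEST banner reduces to integer division: 2097152 / 2048 = 1024 pages (2 GiB worth of 2 MiB pages), and because only node id 0 was passed, all 1024 pages are assigned to node 0. A hedged sketch of that arithmetic; the helper name and output format are mine, not setup/hugepages.sh itself.

    # Sketch of the size -> page-count math seen in the trace above.
    default_hugepages=2048           # kB, the Hugepagesize value echoed earlier

    test_nr_hugepages() {
        local size_kb=$1             # e.g. 2097152 (2 GiB expressed in kB)
        shift
        local node nr=$(( size_kb / default_hugepages ))
        for node in "$@"; do         # e.g. "0" -> every page lands on node 0
            echo "node$node: $nr hugepages requested"
        done
    }

    test_nr_hugepages 2097152 0      # -> node0: 1024 hugepages requested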
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.329 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110646388 kB' 'MemAvailable: 113865312 kB' 'Buffers: 3736 kB' 'Cached: 9463808 kB' 'SwapCached: 0 kB' 'Active: 6527056 kB' 'Inactive: 3511840 kB' 'Active(anon): 6127292 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574776 kB' 'Mapped: 228964 kB' 'Shmem: 5555940 kB' 'KReclaimable: 263328 kB' 'Slab: 930588 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667260 kB' 'KernelStack: 24976 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7667740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.330 12:16:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' [... identical read/compare/continue xtrace lines repeat here for the remaining /proc/meminfo fields (Buffers through CmaTotal); none of them matches HugePages_Surp ...]
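Before this test body started, clear_hp (traced just ahead of the START TEST banner) looped over both NUMA nodes and wrote 0 into every hugepages-*/nr_hugepages file under /sys/devices/system/node, then exported CLEAR_HUGE=yes so setup.sh repopulates the pools itself. A rough standalone equivalent of that reset, assuming the standard sysfs layout; the function name is illustrative and root is required to write these files.

    # Sketch of the per-node hugepage reset that clear_hp performs in the trace.
    # clear_node_hugepages is an illustrative name, not the script's own.
    clear_node_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                [[ -e "$hp/nr_hugepages" ]] || continue
                echo 0 > "$hp/nr_hugepages"
            done
        done
    }
    export CLEAR_HUGE=yes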
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.331 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110646388 kB' 'MemAvailable: 113865312 kB' 'Buffers: 3736 kB' 'Cached: 9463808 kB' 'SwapCached: 0 kB' 'Active: 6527104 kB' 'Inactive: 3511840 kB' 'Active(anon): 6127340 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574808 kB' 'Mapped: 228964 kB' 'Shmem: 5555940 kB' 'KReclaimable: 263328 kB' 'Slab: 930564 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667236 kB' 'KernelStack: 24992 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7667760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 
12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.332 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.333 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.334 nr_hugepages=1024 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.334 resv_hugepages=0 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.334 surplus_hugepages=0 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.334 anon_hugepages=0 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 
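
The loop traced above is the generic meminfo reader in setup/common.sh being asked for HugePages_Rsvd: it walks every "key: value" pair of /proc/meminfo and only echoes a value once the requested key matches, after which setup/hugepages.sh records resv=0 and checks that the kernel totals add up. A minimal stand-alone sketch of that pattern is below; get_meminfo_value is an illustrative name rather than the SPDK helper, and the real script additionally strips the per-node "Node N " prefix with extglob, which this sketch approximates more simply.

#!/usr/bin/env bash
# Sketch only (not the actual SPDK setup/common.sh): look up one field of
# /proc/meminfo, or of a per-NUMA-node meminfo file, by walking the
# "key: value" pairs and echoing the value of the first matching key.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        # Per-node files prefix every line with "Node <n> ".
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    local line var val
    while IFS= read -r line; do
        line=${line#Node "$node" }   # drop the per-node prefix, if present
        var=${line%%:*}              # field name before the colon
        val=${line#*:}               # remainder, e.g. "   574780 kB"
        read -r val _ <<<"$val"      # keep only the numeric token
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# The same consistency check the hugepages trace performs after requesting
# 1024 default-size pages: kernel total == requested + surplus + reserved.
nr_hugepages=1024
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)
total=$(get_meminfo_value HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting mismatch" >&2
fi
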
12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110645380 kB' 'MemAvailable: 113864304 kB' 'Buffers: 3736 kB' 'Cached: 9463848 kB' 'SwapCached: 0 kB' 'Active: 6527096 kB' 'Inactive: 3511840 kB' 'Active(anon): 6127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574780 kB' 'Mapped: 228964 kB' 'Shmem: 5555980 kB' 'KReclaimable: 263328 kB' 'Slab: 930548 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667220 kB' 'KernelStack: 24976 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7667784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.334 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.335 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59722628 kB' 'MemUsed: 5939372 kB' 'SwapCached: 0 kB' 'Active: 1757384 kB' 'Inactive: 165176 kB' 'Active(anon): 1576488 kB' 'Inactive(anon): 0 kB' 'Active(file): 180896 kB' 'Inactive(file): 165176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1566940 kB' 'Mapped: 119068 kB' 'AnonPages: 358776 kB' 'Shmem: 1220868 kB' 'KernelStack: 13064 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139588 kB' 'Slab: 460344 kB' 
'SReclaimable: 139588 kB' 'SUnreclaim: 320756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.336 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
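A reading aid for the long scans above and below: backslash-escaped right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are simply how bash xtrace re-prints the quoted "$get" pattern in the [[ $var == "$get" ]] test; quoting the expansion forces a literal (non-glob) comparison, and the trace escapes each character to show that. The effect can be reproduced with throwaway variables (the timestamp / test-name / script@line prefix seen in this log comes from the harness's customized xtrace prompt; a plain shell only prints the leading '+'):

    get=HugePages_Surp
    var=MemTotal
    set -x
    [[ $var == "$get" ]]   # trace shows something like: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x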
00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.337 node0=1024 expecting 1024 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.337 00:04:00.337 real 0m6.168s 00:04:00.337 user 0m1.614s 00:04:00.337 sys 0m2.701s 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.337 12:16:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:00.337 ************************************ 00:04:00.337 END TEST default_setup 00:04:00.337 ************************************ 00:04:00.337 12:16:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.337 12:16:33 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:00.337 12:16:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.337 12:16:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.337 12:16:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.337 ************************************ 00:04:00.337 START TEST per_node_1G_alloc 00:04:00.337 ************************************ 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:00.337 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.338 12:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.586 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:04.586 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.586 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.587 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.587 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.587 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.587 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.587 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110618072 kB' 'MemAvailable: 113836996 kB' 'Buffers: 3736 kB' 'Cached: 9463956 kB' 'SwapCached: 0 kB' 'Active: 6525900 kB' 'Inactive: 3511840 kB' 'Active(anon): 6126136 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572764 kB' 'Mapped: 228080 kB' 'Shmem: 5556088 kB' 'KReclaimable: 263328 kB' 'Slab: 930532 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667204 kB' 'KernelStack: 25072 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7653280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229916 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.587 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.588 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110622044 kB' 'MemAvailable: 113840968 kB' 'Buffers: 3736 kB' 'Cached: 9463960 kB' 'SwapCached: 0 kB' 'Active: 6526960 kB' 'Inactive: 3511840 kB' 'Active(anon): 6127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573820 kB' 'Mapped: 228080 kB' 'Shmem: 5556092 kB' 'KReclaimable: 263328 kB' 'Slab: 930572 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667244 kB' 'KernelStack: 25104 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7653300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229868 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 
12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.590 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110625984 kB' 'MemAvailable: 113844908 kB' 'Buffers: 3736 kB' 'Cached: 9463976 kB' 'SwapCached: 0 kB' 'Active: 6525924 kB' 'Inactive: 3511840 kB' 'Active(anon): 6126160 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573284 kB' 'Mapped: 227976 kB' 'Shmem: 5556108 kB' 'KReclaimable: 263328 kB' 'Slab: 930528 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667200 kB' 'KernelStack: 25152 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7653324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.591 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.591 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.592 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.593 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.594 12:16:37 
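The trace above is the setup/common.sh get_meminfo helper walking the meminfo file one "Key: value" pair at a time, skipping every key that is not the one requested (HugePages_Surp, then HugePages_Rsvd here), and echoing the matching value before returning. A minimal sketch of that lookup, reconstructed from the trace; the function and variable names below are illustrative and not the literal SPDK source:

    # Sketch (reconstructed from the trace, not the actual setup/common.sh):
    # scan /proc/meminfo or a per-node meminfo file, skip non-matching keys,
    # echo the value of the requested key, fall back to 0 if it is absent.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local f1 f2 f3 f4 _ var val
        while IFS=': ' read -r f1 f2 f3 f4 _; do
            # Per-node lines carry a "Node <n>" prefix; drop it before matching.
            var=$f1 val=$f2
            [[ $f1 == Node ]] && { var=$f3; val=$f4; }
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        echo 0    # requested field not present
    }

With the system state shown in the trace, get_meminfo_sketch HugePages_Rsvd prints the 0 echoed above, and get_meminfo_sketch HugePages_Surp 0 would read /sys/devices/system/node/node0/meminfo the same way the per-node lookups further below do.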
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.594 nr_hugepages=1024 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.594 resv_hugepages=0 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.594 surplus_hugepages=0 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.594 anon_hugepages=0 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110626872 kB' 'MemAvailable: 113845796 kB' 'Buffers: 3736 kB' 'Cached: 9463996 kB' 'SwapCached: 0 kB' 'Active: 6525632 kB' 'Inactive: 3511840 kB' 'Active(anon): 6125868 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572988 kB' 'Mapped: 227976 kB' 'Shmem: 5556128 kB' 'KReclaimable: 263328 kB' 'Slab: 930528 kB' 'SReclaimable: 263328 kB' 'SUnreclaim: 667200 kB' 'KernelStack: 25104 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7651376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229836 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 
kB' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.594 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.595 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.596 12:16:37 
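After HugePages_Total is read back (1024 above), the checks traced at setup/hugepages.sh@107-110 confirm that the kernel's total equals the requested nr_hugepages plus the surplus and reserved counts gathered earlier. A self-contained sketch of that consistency check, using the values echoed in the trace (the awk lookup stands in for the get_meminfo call and is illustrative only):

    # Sketch of the accounting check traced at setup/hugepages.sh@107-110,
    # with the values echoed above: nr_hugepages=1024, surp=0, resv=0.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: ${total} pages"
    else
        echo "mismatch: kernel reports ${total}, expected $((nr_hugepages + surp + resv))" >&2
    fi

The get_nodes call that follows repeats the same lookup per NUMA node against /sys/devices/system/node/node*/meminfo; on this two-node box the trace records nodes_sys[...]=512 for each node and no_nodes=2, i.e. the 1024 pages split as 512 per node, which matches the HugePages_Total: 512 reported in node0's meminfo below.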
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60763680 kB' 'MemUsed: 4898320 kB' 'SwapCached: 0 kB' 'Active: 1759684 kB' 'Inactive: 165176 kB' 'Active(anon): 1578788 kB' 'Inactive(anon): 0 kB' 'Active(file): 180896 kB' 'Inactive(file): 165176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567044 kB' 'Mapped: 118516 kB' 'AnonPages: 360456 kB' 'Shmem: 1220972 kB' 'KernelStack: 13128 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139588 kB' 'Slab: 460364 kB' 'SReclaimable: 139588 kB' 'SUnreclaim: 320776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.596 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 
12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.597 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682032 kB' 'MemFree: 49867716 kB' 'MemUsed: 10814316 kB' 'SwapCached: 0 kB' 'Active: 4766500 kB' 'Inactive: 3346664 kB' 'Active(anon): 4547632 kB' 'Inactive(anon): 0 kB' 'Active(file): 218868 kB' 'Inactive(file): 3346664 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900688 kB' 'Mapped: 109460 kB' 'AnonPages: 212580 kB' 'Shmem: 4335156 kB' 
'KernelStack: 11960 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123740 kB' 'Slab: 470276 kB' 'SReclaimable: 123740 kB' 'SUnreclaim: 346536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.598 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:04.599 node0=512 expecting 512 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:04.599 node1=512 expecting 512 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:04.599 00:04:04.599 real 0m4.218s 00:04:04.599 user 0m1.635s 00:04:04.599 sys 0m2.660s 00:04:04.599 12:16:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.599 12:16:37 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.599 ************************************ 00:04:04.599 END TEST per_node_1G_alloc 00:04:04.599 ************************************ 00:04:04.599 12:16:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:04.599 12:16:37 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:04.599 12:16:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.599 12:16:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.599 12:16:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.599 ************************************ 00:04:04.599 START TEST even_2G_alloc 00:04:04.599 ************************************ 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.599 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:04.600 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:04.600 12:16:37 
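The even_2G_alloc setup traced just above ends with NRHUGE=1024 and an entry of 512 in nodes_test for each of the two NUMA nodes (the Hugepagesize: 2048 kB line in the meminfo dumps puts that at 2 GiB total). Below is a minimal standalone sketch of that even split; it reproduces the nr_hugepages/nodes_test values from the trace but is only an illustration, not the actual setup/hugepages.sh logic (which, per the hugepages.sh@81..@84 lines above, fills nodes_test starting from the highest node index and also consults a user_nodes list that is empty in this run).

#!/usr/bin/env bash
# Illustration only: split a hugepage count evenly across NUMA nodes,
# reproducing nodes_test[0]=512 / nodes_test[1]=512 for NRHUGE=1024.
set -euo pipefail

nr_hugepages=1024   # 2 GiB worth of 2048 kB pages, as in the trace above
no_nodes=2          # NUMA nodes present on this system

nodes_test=()
for ((node = 0; node < no_nodes; node++)); do
    nodes_test[node]=$((nr_hugepages / no_nodes))   # integer division
done
for ((i = 0; i < nr_hugepages % no_nodes; i++)); do
    nodes_test[i]=$((nodes_test[i] + 1))            # spread any remainder
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done

Run as-is this prints node0=512 expecting 512 and node1=512 expecting 512, matching the expectations echoed at the end of the per_node_1G_alloc test above.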
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:04.600 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.600 12:16:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.810 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:08.810 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110635320 kB' 'MemAvailable: 113854228 kB' 'Buffers: 3736 kB' 'Cached: 9464148 kB' 'SwapCached: 0 kB' 'Active: 6527588 kB' 'Inactive: 3511840 kB' 'Active(anon): 6127824 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574328 kB' 'Mapped: 228244 kB' 'Shmem: 5556280 kB' 'KReclaimable: 263296 kB' 'Slab: 930656 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667360 kB' 'KernelStack: 24976 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7651044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229772 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.810 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
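The hugepages.sh@96 check a few lines earlier ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) verifies that transparent hugepages are not globally disabled before AnonHugePages is sampled; "always [madvise] never" is the familiar contents of the THP enabled switch, presumably /sys/kernel/mm/transparent_hugepage/enabled, although the trace only shows the string and not the file name. A hedged standalone equivalent:

#!/usr/bin/env bash
# The trace shows only the file's contents ("always [madvise] never");
# this path is the usual location and is an assumption here.
thp=/sys/kernel/mm/transparent_hugepage/enabled

if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    echo "transparent hugepages enabled: $(<"$thp")"
else
    echo "transparent hugepages disabled or unavailable"
fi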
00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110636716 kB' 'MemAvailable: 113855624 kB' 'Buffers: 3736 kB' 'Cached: 9464152 kB' 'SwapCached: 0 kB' 'Active: 6527928 kB' 'Inactive: 3511840 kB' 'Active(anon): 6128164 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574680 kB' 'Mapped: 228076 kB' 'Shmem: 5556284 kB' 'KReclaimable: 263296 kB' 'Slab: 930636 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667340 kB' 'KernelStack: 24960 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7650932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
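Most of the surrounding lines are the same four-step pattern repeated once per meminfo field: set IFS=': ', read the next "key: value" pair, compare the key against the field being looked up (AnonHugePages here, HugePages_Surp next), and continue until it matches, at which point the value is echoed and the function returns. The per-node variant reads /sys/devices/system/node/nodeN/meminfo instead and first strips the leading "Node N " prefix (the mem=("${mem[@]#Node +([0-9]) }") step in the trace). A compact standalone sketch of that lookup, using an illustrative helper name rather than the exact setup/common.sh implementation:

#!/usr/bin/env bash
# Illustrative stand-in for the get_meminfo lookups traced in this log.
# Usage: meminfo_value <field> [numa-node]
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val

    # Per-node counters live in sysfs and carry a "Node N " prefix per line.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"          # value only; a trailing kB unit lands in the throwaway field
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")

    return 1                 # field not present
}

meminfo_value HugePages_Surp       # system-wide surplus hugepages
meminfo_value HugePages_Free 0     # same field, NUMA node 0 only

Against the numbers printed in this log, such a lookup returns 0 for HugePages_Surp and 512 for HugePages_Free on node 1, which is exactly what the verify bookkeeping around it (nodes_test[node] += resv, += 0) accumulates before the "expecting 512" comparisons.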
00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.811 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110637548 kB' 'MemAvailable: 113856456 kB' 'Buffers: 3736 kB' 'Cached: 9464168 kB' 'SwapCached: 0 kB' 'Active: 6529756 kB' 'Inactive: 3511840 kB' 'Active(anon): 6129992 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577496 kB' 'Mapped: 228500 kB' 'Shmem: 5556300 kB' 'KReclaimable: 263296 kB' 'Slab: 930600 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667304 kB' 'KernelStack: 24944 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7654680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 
12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.812 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
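[editor's note] The repeated entries of the form [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] look garbled but are just how bash xtrace renders the test: when the right-hand side of == is quoted, the trace backslash-escapes every character to show it is matched literally rather than as a glob. A tiny illustration (the variable name 'get' mirrors the one used in get_meminfo; the rest is placeholder):

  set -x
  get='HugePages_Rsvd'
  [[ MemFree == "$get" ]] || :    # no match, so the real loop falls through to 'continue'
  # xtrace renders the quoted right-hand side as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  set +x
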
00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.813 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.814 nr_hugepages=1024 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.814 resv_hugepages=0 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.814 surplus_hugepages=0 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.814 anon_hugepages=0 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.814 12:16:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110633520 kB' 'MemAvailable: 113852428 kB' 'Buffers: 3736 kB' 'Cached: 9464192 kB' 'SwapCached: 0 kB' 'Active: 6532880 kB' 'Inactive: 3511840 kB' 'Active(anon): 6133116 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580144 kB' 'Mapped: 228776 kB' 'Shmem: 5556324 kB' 'KReclaimable: 263296 kB' 'Slab: 930600 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667304 kB' 'KernelStack: 24976 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7657224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229728 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 
12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
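[editor's note] The nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 echoes earlier in this pass feed the even_2G_alloc assertion that the HugePages_Total value read from meminfo equals the requested page count plus surplus and reserved pages. The check reduces to a single arithmetic test; a stripped-down version with the variable names taken from the trace and the values from this run:

  nr_hugepages=1024   # requested by the even 2G allocation test
  surp=0              # get_meminfo HugePages_Surp
  resv=0              # get_meminfo HugePages_Rsvd
  total=1024          # get_meminfo HugePages_Total

  (( total == nr_hugepages + surp + resv )) || {
      echo "hugepage accounting mismatch: $total != $(( nr_hugepages + surp + resv ))" >&2
      exit 1
  }
  echo "accounting OK: ${total} pages x 2048 kB = $(( total * 2048 )) kB"   # 2097152 kB, matching the Hugetlb field
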
00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.814 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60765144 kB' 'MemUsed: 4896856 kB' 'SwapCached: 0 kB' 'Active: 1758744 kB' 'Inactive: 165176 kB' 'Active(anon): 1577848 kB' 'Inactive(anon): 0 kB' 'Active(file): 180896 kB' 'Inactive(file): 165176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567232 kB' 'Mapped: 118536 kB' 'AnonPages: 359948 kB' 'Shmem: 1221160 kB' 'KernelStack: 13112 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139556 kB' 'Slab: 460136 kB' 'SReclaimable: 139556 kB' 'SUnreclaim: 320580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.815 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682032 kB' 'MemFree: 49868628 kB' 'MemUsed: 10813404 kB' 'SwapCached: 0 kB' 'Active: 4768528 kB' 'Inactive: 3346664 kB' 'Active(anon): 4549660 kB' 'Inactive(anon): 0 kB' 'Active(file): 218868 kB' 'Inactive(file): 3346664 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900724 kB' 'Mapped: 109460 kB' 'AnonPages: 214572 kB' 'Shmem: 4335192 kB' 'KernelStack: 11864 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123740 kB' 'Slab: 470464 kB' 'SReclaimable: 123740 kB' 'SUnreclaim: 346724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:08.816 node0=512 expecting 512 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:08.816 node1=512 expecting 512 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:08.816 00:04:08.816 real 0m4.177s 00:04:08.816 user 0m1.626s 00:04:08.816 sys 0m2.630s 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.816 12:16:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.816 ************************************ 00:04:08.816 END TEST even_2G_alloc 00:04:08.817 
************************************ 00:04:08.817 12:16:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.817 12:16:42 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:08.817 12:16:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.817 12:16:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.817 12:16:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.817 ************************************ 00:04:08.817 START TEST odd_alloc 00:04:08.817 ************************************ 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.817 12:16:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
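The trace above computes the odd allocation request before handing off to setup.sh: HUGEMEM=2049 gives nr_hugepages=1025, which is split across the two NUMA nodes as node0=513 and node1=512, and each node's count is later read back field-by-field from /sys/devices/system/node/nodeN/meminfo with the IFS=': ' read loops seen throughout this log. A minimal sketch of that logic follows; the function names are illustrative only and are not the actual setup/hugepages.sh or setup/common.sh helpers.

#!/usr/bin/env bash
# Sketch of the two steps traced above (assumed helper names, not SPDK's own):
#  1) split an odd hugepage count (1025) across 2 NUMA nodes -> 513 + 512
#  2) read one field back from a node's meminfo, which the trace does with
#     repeated IFS=': ' read loops; awk is used here purely for brevity.

split_hugepages_per_node() {
    local total=$1 nodes=$2
    local base=$((total / nodes)) extra=$((total % nodes)) i
    for ((i = 0; i < nodes; i++)); do
        # the first "extra" nodes absorb the remainder, one page each
        echo "node${i}=$((base + (i < extra ? 1 : 0)))"
    done
}

get_node_meminfo_field() {
    # node meminfo lines look like: "Node 0 HugePages_Total:    512"
    local node=$1 field=$2
    awk -v f="${field}:" '$3 == f { print $4 }' \
        "/sys/devices/system/node/node${node}/meminfo"
}

split_hugepages_per_node 1025 2            # -> node0=513, node1=512
get_node_meminfo_field 0 HugePages_Total   # -> per-node count after setup.sh runs

The even_2G_alloc test that just finished used the same per-node readback to confirm 'node0=512 expecting 512' and 'node1=512 expecting 512'; the odd_alloc test below repeats the check against the uneven 513/512 split.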
00:04:13.027 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:13.027 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110634688 kB' 'MemAvailable: 113853596 kB' 'Buffers: 3736 kB' 'Cached: 9464324 kB' 'SwapCached: 0 kB' 'Active: 6528404 kB' 'Inactive: 3511840 kB' 'Active(anon): 6128640 kB' 'Inactive(anon): 0 kB' 
'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574892 kB' 'Mapped: 228092 kB' 'Shmem: 5556456 kB' 'KReclaimable: 263296 kB' 'Slab: 930556 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667260 kB' 'KernelStack: 24912 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511020 kB' 'Committed_AS: 7651788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.027 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 
12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 
12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.028 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.029 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110636208 kB' 'MemAvailable: 113855116 kB' 'Buffers: 3736 kB' 'Cached: 9464328 kB' 'SwapCached: 0 kB' 'Active: 6528516 kB' 'Inactive: 3511840 kB' 'Active(anon): 6128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574976 kB' 'Mapped: 228092 kB' 'Shmem: 5556460 kB' 'KReclaimable: 263296 kB' 'Slab: 930556 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667260 kB' 'KernelStack: 24896 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511020 kB' 'Committed_AS: 7651808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
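The trace around this point is the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time until the requested key (here HugePages_Surp) matches. The following is a minimal sketch of that lookup pattern, reconstructed from the trace rather than copied from the real setup/common.sh; the function name get_meminfo_sketch and its exact control flow are illustrative assumptions.

```bash
#!/usr/bin/env bash
# Sketch only: look up one key (e.g. HugePages_Surp) in /proc/meminfo,
# or in a node's own meminfo file when a NUMA node is given.
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo_sketch() {
    local get=$1 node=$2 var val _rest
    local mem_f=/proc/meminfo mem

    # With an empty $node this path does not exist, so the lookup
    # falls back to the system-wide /proc/meminfo (as in this run).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the "Key: value [kB]" lines and print the value of the match.
    while IFS=': ' read -r var val _rest; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: surp=$(get_meminfo_sketch HugePages_Surp)   # "0" in this run
```

Every non-matching key simply hits the `continue` branch, which is why the log shows one `[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` pair per meminfo field before the final `echo 0` / `return 0`.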
00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.029 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.030 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110636792 kB' 'MemAvailable: 113855700 kB' 'Buffers: 3736 kB' 'Cached: 9464344 kB' 'SwapCached: 0 kB' 'Active: 6528020 kB' 'Inactive: 3511840 kB' 'Active(anon): 6128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574988 kB' 'Mapped: 228016 kB' 'Shmem: 5556476 kB' 'KReclaimable: 263296 kB' 'Slab: 930544 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667248 kB' 'KernelStack: 24912 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511020 kB' 'Committed_AS: 7651960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.031 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.032 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:13.033 nr_hugepages=1025 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.033 resv_hugepages=0 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.033 surplus_hugepages=0 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.033 anon_hugepages=0 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110636792 kB' 'MemAvailable: 113855700 kB' 'Buffers: 3736 kB' 'Cached: 9464356 kB' 'SwapCached: 0 kB' 'Active: 6528364 kB' 'Inactive: 3511840 kB' 'Active(anon): 6128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575376 kB' 'Mapped: 228016 kB' 'Shmem: 5556488 kB' 'KReclaimable: 263296 kB' 'Slab: 930544 kB' 'SReclaimable: 263296 kB' 'SUnreclaim: 667248 kB' 'KernelStack: 24976 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70511020 kB' 'Committed_AS: 7652348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 
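Once the individual lookups return, the odd_alloc bookkeeping visible in the trace (setup/hugepages.sh lines 97 through 110: anon=0, surp=0, resv=0, nr_hugepages=1025, then the arithmetic checks) reduces to a small consistency test: the pool must contain exactly the odd page count that was requested, with nothing coming from surplus or reserved pages. The sketch below is illustrative only; it reuses the hypothetical get_meminfo_sketch helper from above and uses 1025 as the expected count seen in this run.

```bash
# Sketch only: the consistency check the odd_alloc test performs after
# reading the hugepage counters, with the values from this run in comments.
verify_odd_alloc_sketch() {
    local expected=$1                                      # 1025 in this run
    local anon surp resv nr_hugepages
    anon=$(get_meminfo_sketch AnonHugePages)               # 0 (kB of anon THP)
    surp=$(get_meminfo_sketch HugePages_Surp)              # 0 surplus pages
    resv=$(get_meminfo_sketch HugePages_Rsvd)              # 0 reserved pages
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)     # 1025 pages in the pool

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv" \
         "surplus_hugepages=$surp anon_hugepages=$anon"

    # Mirrors the two arithmetic tests in the trace: the requested count must
    # equal the pool size plus surplus and reserved pages, and the pool size
    # alone must already equal the requested count.
    (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages ))
}

# Example: verify_odd_alloc_sketch 1025 && echo "odd allocation consistent"
```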
12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.034 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60763512 kB' 'MemUsed: 4898488 kB' 'SwapCached: 0 kB' 'Active: 1760056 kB' 'Inactive: 165176 kB' 'Active(anon): 1579160 kB' 
'Inactive(anon): 0 kB' 'Active(file): 180896 kB' 'Inactive(file): 165176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567412 kB' 'Mapped: 118556 kB' 'AnonPages: 361056 kB' 'Shmem: 1221340 kB' 'KernelStack: 13096 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139556 kB' 'Slab: 460224 kB' 'SReclaimable: 139556 kB' 'SUnreclaim: 320668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.036 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682032 kB' 'MemFree: 49873616 kB' 'MemUsed: 10808416 kB' 'SwapCached: 0 kB' 'Active: 4768324 kB' 'Inactive: 3346664 kB' 'Active(anon): 4549456 kB' 'Inactive(anon): 0 kB' 'Active(file): 218868 kB' 'Inactive(file): 3346664 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900724 kB' 'Mapped: 109460 kB' 'AnonPages: 214284 kB' 'Shmem: 4335192 kB' 'KernelStack: 11864 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123740 kB' 'Slab: 470320 kB' 'SReclaimable: 123740 kB' 'SUnreclaim: 346580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
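(The trace above and below is setup/common.sh's get_meminfo walking a captured copy of /proc/meminfo, or of a node's /sys/devices/system/node/nodeN/meminfo, one "key: value" pair at a time until it hits the requested key. The sketch below is reconstructed from the xtrace for orientation only; it follows the names seen in the trace but is not the verbatim SPDK implementation.)

  # Sketch only: approximate reconstruction of setup/common.sh:get_meminfo from the xtrace.
  shopt -s extglob   # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo
      local -a mem
      # Per-node queries read that node's meminfo and strip its "Node N " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      local IFS=': '
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  # In this run: get_meminfo HugePages_Total -> 1025, get_meminfo HugePages_Surp 0 -> 0.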
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:13.038 node0=512 expecting 513
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:13.038 node1=513 expecting 512
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:13.038
00:04:13.038 real 0m4.140s
00:04:13.038 user 0m1.604s
00:04:13.038 sys 0m2.607s
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:13.038 12:16:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:13.038 ************************************
00:04:13.038 END TEST odd_alloc
00:04:13.038 ************************************
00:04:13.038 12:16:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:13.038 12:16:46 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:13.038 12:16:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:13.038 12:16:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:13.038 12:16:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:13.038 ************************************
00:04:13.038 START TEST custom_alloc
00:04:13.038 ************************************
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 >
1 )) 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.038 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.039 12:16:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.250 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
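(At this point custom_alloc has sized its request as nodes_hp[0]=512 and nodes_hp[1]=1024 and handed HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' to scripts/setup.sh; the device lines above and below are that script's status output. How setup.sh turns the request into kernel state is internal to it, but a per-node 2048 kB hugepage reservation of this shape is normally applied through the per-node sysfs knobs, roughly as in the sketch below, which is for orientation only and not the literal setup.sh code.)

  # Sketch only: generic per-node hugetlb reservation on Linux, matching the
  # request seen in the trace (512 pages on node 0, 1024 on node 1, 2048 kB each).
  echo 512  | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  grep HugePages_Total /proc/meminfo   # the log later reports HugePages_Total: 1536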
00:04:17.250 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.250 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 109583052 kB' 'MemAvailable: 112801964 kB' 'Buffers: 3736 kB' 'Cached: 9464500 kB' 'SwapCached: 0 kB' 'Active: 6531844 kB' 'Inactive: 3511840 kB' 'Active(anon): 6132080 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578552 kB' 'Mapped: 228112 kB' 'Shmem: 5556632 kB' 'KReclaimable: 263304 kB' 'Slab: 930636 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667332 kB' 'KernelStack: 24976 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987756 kB' 'Committed_AS: 7653140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229612 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.250 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
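At this point get_meminfo starts walking the snapshot it just printed, one 'field: value' pair at a time, until it reaches the requested key (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down). A standalone sketch of that pattern, assuming a host /proc/meminfo; the mapfile, "Node N " prefix strip, and IFS=': ' read structure follow the setup/common.sh steps visible in the trace, but the function body is a simplified reconstruction, not the verbatim script:

#!/usr/bin/env bash
shopt -s extglob                              # needed for the +([0-9]) prefix strip below
get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # With a node argument the helper reads the per-node meminfo file instead:
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")            # drop the "Node N " prefix of per-node lines
  local line var val _
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
  done
  echo 0                                      # key not found -> 0, like the "echo 0" steps above
}
get_meminfo HugePages_Total                   # prints 1536 on the node captured in this log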
00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
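The same field-by-field scan is then repeated for HugePages_Surp and HugePages_Rsvd, and verify_nr_hugepages checks the results against the 1536 pages requested above. A small self-check with the same arithmetic, assuming the host /proc/meminfo; read_mi is a hypothetical helper for the example, and the expected figures come from the snapshot in this log:

expected=1536                                        # nr_hugepages set at hugepages.sh@188
read_mi() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }   # hypothetical helper
total=$(read_mi HugePages_Total)                     # 1536 in the dump above
surp=$(read_mi HugePages_Surp)                       # 0
rsvd=$(read_mi HugePages_Rsvd)                       # 0
size_kb=$(read_mi Hugepagesize)                      # 2048 (kB)
hugetlb_kb=$(read_mi Hugetlb)                        # 3145728 (kB)
(( total == expected && surp == 0 && rsvd == 0 )) || echo "unexpected hugepage counts"
(( hugetlb_kb == total * size_kb )) || echo "Hugetlb != HugePages_Total * Hugepagesize"
# 1536 pages * 2048 kB/page = 3145728 kB, which agrees with the snapshot printed above.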
00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.251 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 109583564 kB' 'MemAvailable: 112802476 kB' 'Buffers: 3736 kB' 'Cached: 9464500 kB' 'SwapCached: 0 kB' 'Active: 6530960 kB' 'Inactive: 3511840 kB' 'Active(anon): 6131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578112 kB' 'Mapped: 228036 kB' 'Shmem: 5556632 kB' 'KReclaimable: 263304 kB' 'Slab: 930632 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667328 kB' 'KernelStack: 24944 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987756 kB' 'Committed_AS: 7653160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229596 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.252 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.253 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 109583564 kB' 'MemAvailable: 112802476 kB' 'Buffers: 3736 kB' 'Cached: 9464500 kB' 'SwapCached: 0 kB' 'Active: 6531052 kB' 'Inactive: 3511840 kB' 'Active(anon): 6131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578180 kB' 'Mapped: 228036 kB' 'Shmem: 5556632 kB' 'KReclaimable: 263304 kB' 'Slab: 930632 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667328 kB' 'KernelStack: 24928 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987756 kB' 'Committed_AS: 7653180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229596 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.254 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.255 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
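The long run of key checks above is setup/common.sh scanning a meminfo file entry by entry until it reaches the requested key (HugePages_Rsvd here). A rough reconstruction of that pattern, inferred from the trace rather than copied from the SPDK source (the helper is shown as get_meminfo to match the name in the trace, but the body is a simplified sketch):

# Simplified sketch of the parsing pattern being traced: scan a meminfo
# file with IFS=': ', skip every key except the requested one, print its
# value. Per-node files carry a "Node <n> " prefix, stripped here by sed.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val rest
    local mem_f=/proc/meminfo

    # Per-node counters live under /sys when a node number is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS=': ' read -r var val rest; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" entries above
        echo "$val"                        # e.g. "echo 0" for HugePages_Rsvd
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

# On the machine in this log:
#   get_meminfo HugePages_Rsvd      -> 0
#   get_meminfo HugePages_Total     -> 1536
#   get_meminfo HugePages_Total 0   -> 512
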
00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:17.256 nr_hugepages=1536 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.256 resv_hugepages=0 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.256 surplus_hugepages=0 00:04:17.256 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.256 anon_hugepages=0 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 109583856 kB' 'MemAvailable: 112802768 kB' 'Buffers: 3736 kB' 'Cached: 9464544 kB' 'SwapCached: 0 kB' 'Active: 6530580 kB' 'Inactive: 3511840 kB' 'Active(anon): 6130816 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 577632 kB' 'Mapped: 228032 kB' 'Shmem: 5556676 kB' 'KReclaimable: 263304 kB' 'Slab: 930632 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667328 kB' 'KernelStack: 24944 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987756 kB' 'Committed_AS: 7653204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229596 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.257 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
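Interleaved with the scan, the hugepages.sh lines record the quantities the custom_alloc test derives and the consistency checks it applies before looking at individual NUMA nodes. In condensed form, and assuming the get_meminfo sketch above (variable names are illustrative where the log does not show them):

# Condensed view of the assertions visible in this trace: the test asked
# for 1536 2 MiB hugepages and verifies the kernel-reported totals add up
# before checking the per-node split.
nr_hugepages=1536                         # requested total (from the log)
resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
surp=$(get_meminfo HugePages_Surp)        # 0 in this run
anon=$(get_meminfo AnonHugePages)         # reported as anon_hugepages=0 here
total=$(get_meminfo HugePages_Total)      # 1536 in this run

(( total == nr_hugepages + surp + resv )) || exit 1   # overall pool adds up
(( total == nr_hugepages )) || exit 1                 # nothing reserved or surplus
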
00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:17.258 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60761872 kB' 'MemUsed: 4900128 kB' 'SwapCached: 0 kB' 'Active: 1759892 kB' 'Inactive: 165176 kB' 'Active(anon): 1578996 kB' 'Inactive(anon): 0 kB' 'Active(file): 180896 kB' 'Inactive(file): 165176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567564 kB' 'Mapped: 118572 kB' 'AnonPages: 360748 kB' 'Shmem: 1221492 kB' 'KernelStack: 13096 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139556 kB' 'Slab: 460312 kB' 'SReclaimable: 139556 kB' 'SUnreclaim: 320756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 
12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 
12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
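For the node 0 pass that began above, the trace shows the input switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefix being stripped with an extglob expansion so the same key scan applies. A small illustration of that step (runnable on a NUMA Linux host; not the full setup/common.sh logic):

# Per-node meminfo lines look like "Node 0 HugePages_Total:    512".
# The mem=("${mem[@]#Node +([0-9]) }") expansion seen in the trace strips
# that prefix (extglob pattern) before the key/value scan runs.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp):'
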
00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.259 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
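Once the node 0 scan below returns 0 for HugePages_Surp, the same pass repeats for node 1; together they confirm the split this run expects, 512 pages on node 0 and 1024 on node 1 (1536 total), matching the per-node dumps in the log. A hedged sketch of that per-node check, reusing the get_meminfo sketch above (array and variable names are illustrative, not the verbatim hugepages.sh code):

# Expected per-node split for this custom_alloc run (values from the log).
expected=([0]=512 [1]=1024)

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(get_meminfo HugePages_Total "$node")  # per-node kernel counter
    surp=$(get_meminfo HugePages_Surp "$node")    # 0 for both nodes here
    (( total == expected[node] + surp )) || {
        echo "node$node: got $total, expected ${expected[node]}" >&2
        exit 1
    }
done
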
00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682032 kB' 'MemFree: 48821988 kB' 'MemUsed: 11860044 kB' 'SwapCached: 0 kB' 'Active: 4770384 kB' 'Inactive: 3346664 kB' 'Active(anon): 4551516 kB' 'Inactive(anon): 0 kB' 'Active(file): 218868 kB' 'Inactive(file): 3346664 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900752 kB' 'Mapped: 109460 kB' 'AnonPages: 216548 kB' 'Shmem: 4335220 kB' 'KernelStack: 11848 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123748 kB' 'Slab: 470320 kB' 'SReclaimable: 123748 kB' 'SUnreclaim: 346572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.260 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 
12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.261 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.262 12:16:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:17.262 node0=512 expecting 512 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:17.262 node1=1024 expecting 1024 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:17.262 00:04:17.262 real 0m4.114s 00:04:17.262 user 0m1.625s 00:04:17.262 sys 0m2.558s 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.262 12:16:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.262 ************************************ 00:04:17.262 END TEST custom_alloc 00:04:17.262 ************************************ 00:04:17.262 12:16:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:17.262 12:16:50 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:17.262 12:16:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.262 12:16:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.262 12:16:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.262 ************************************ 00:04:17.262 START TEST no_shrink_alloc 00:04:17.262 ************************************ 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:17.262 12:16:50 
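The custom_alloc run above ends with the expected per-node split confirmed (node0=512, node1=1024), and no_shrink_alloc then calls get_test_nr_hugepages with 2097152 and node 0, which the trace resolves to nr_hugepages=1024 before distributing it per node (continued below). A minimal bash sketch of that arithmetic follows; it assumes the request is expressed in kB and that the system uses the 2048 kB (2 MiB) hugepage size reported by /proc/meminfo, and the variable names are illustrative rather than the SPDK helpers themselves:

# Sketch only: reproduce the hugepage-count arithmetic seen in the trace.
# Assumes the requested size is in kB and Hugepagesize is 2048 kB (2 MiB).
requested_kb=2097152                               # pool size requested by the test (2 GiB)
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_hugepages=$(( requested_kb / hugepage_kb ))     # 2097152 / 2048 = 1024 pages
echo "nr_hugepages=$nr_hugepages"

# With a single user-selected node (node 0), the whole pool lands on that node,
# mirroring nodes_test[_no_nodes]=1024 in the trace that follows.
declare -A nodes_test=()
nodes_test[0]=$nr_hugepages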
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.262 12:16:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.474 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.474 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- 
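Before verifying the pool, the test re-runs scripts/setup.sh; each PCI function it touches reports "Already using the vfio-pci driver", meaning no rebind was needed. As a quick manual spot-check of such a binding, the sysfs driver symlink is enough; the sketch below is not part of the test scripts, and the device address is only an example taken from the listing above:

# Sketch: check which kernel driver a PCI function is currently bound to.
# 0000:65:00.0 is one of the devices listed above; substitute any BDF of interest.
bdf=0000:65:00.0
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"   # e.g. vfio-pci
else
    echo "$bdf is not bound to any driver"
fi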
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.474 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110564516 kB' 'MemAvailable: 113783428 kB' 'Buffers: 3736 kB' 'Cached: 9464672 kB' 'SwapCached: 0 kB' 'Active: 6530076 kB' 'Inactive: 3511840 kB' 'Active(anon): 6130312 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576500 kB' 'Mapped: 228132 kB' 'Shmem: 5556804 kB' 'KReclaimable: 263304 kB' 'Slab: 930736 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667432 kB' 'KernelStack: 25024 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7655404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 
12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.475 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110561456 kB' 'MemAvailable: 113780368 kB' 'Buffers: 3736 kB' 'Cached: 9464676 kB' 'SwapCached: 0 kB' 'Active: 6531972 kB' 'Inactive: 3511840 kB' 'Active(anon): 6132208 
kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578432 kB' 'Mapped: 228132 kB' 'Shmem: 5556808 kB' 'KReclaimable: 263304 kB' 'Slab: 930712 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667408 kB' 'KernelStack: 25024 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7674728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.476 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.477 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.478 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110563384 kB' 'MemAvailable: 113782296 kB' 'Buffers: 3736 kB' 'Cached: 9464692 kB' 'SwapCached: 0 kB' 'Active: 6529544 kB' 'Inactive: 3511840 kB' 'Active(anon): 6129780 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576448 kB' 'Mapped: 228044 kB' 'Shmem: 5556824 kB' 
'KReclaimable: 263304 kB' 'Slab: 930688 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667384 kB' 'KernelStack: 24912 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7656692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229660 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
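The same scan now repeats for HugePages_Rsvd and, further down, for HugePages_Total; together with the surplus value just read, those numbers drive the no_shrink_alloc accounting in setup/hugepages.sh that this trace is executing (surp=0, resv=0, nr_hugepages=1024, finishing with "node0=1024 expecting 1024"). A rough, simplified sketch of that accounting, reusing the hypothetical get_meminfo_sketch above under a made-up wrapper name (the real logic is inline in setup/hugepages.sh, visible as hugepages.sh@99-130 in the trace, and tracks per-node expectations in nodes_test/nodes_sys arrays, which is where the "expecting 1024" comes from):

verify_hugepages_sketch() {
    local nr_hugepages=$1                       # requested pool size, 1024 in this run
    local surp resv total node node_total
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the trace
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # also 0
    total=$(get_meminfo_sketch HugePages_Total)
    # the global pool must account for requested, surplus and reserved pages
    (( total == nr_hugepages + surp + resv )) || return 1
    # then report how the pool is spread across NUMA nodes
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        node_total=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node$node=$node_total"            # this run prints node0=1024
    done
}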
00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.479 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.480 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.481 nr_hugepages=1024 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.481 resv_hugepages=0 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.481 surplus_hugepages=0 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.481 anon_hugepages=0 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110564400 kB' 'MemAvailable: 113783312 kB' 'Buffers: 3736 kB' 'Cached: 9464716 kB' 'SwapCached: 0 kB' 'Active: 6529040 kB' 'Inactive: 3511840 kB' 'Active(anon): 6129276 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575764 kB' 'Mapped: 228052 kB' 'Shmem: 5556848 kB' 'KReclaimable: 263304 kB' 'Slab: 930688 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667384 kB' 'KernelStack: 24912 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7656848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229644 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.481 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.482 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.483 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59698356 kB' 'MemUsed: 5963644 kB' 'SwapCached: 0 kB' 'Active: 1761596 kB' 'Inactive: 165176 kB' 'Active(anon): 1580700 kB' 'Inactive(anon): 0 kB' 'Active(file): 180896 kB' 'Inactive(file): 165176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567648 kB' 'Mapped: 118592 kB' 'AnonPages: 362408 kB' 'Shmem: 1221576 kB' 'KernelStack: 13192 kB' 'PageTables: 5304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139556 kB' 'Slab: 460288 kB' 'SReclaimable: 139556 kB' 'SUnreclaim: 320732 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 
12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.484 12:16:54 setup.sh.hugepages.no_shrink_alloc -- 
[trace condensed: the setup/common.sh@31-32 loop (IFS=': '; read -r var val _) kept skipping the remaining /proc/meminfo fields, Shmem through HugePages_Free, none of which match HugePages_Surp]
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
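The nodes_test bookkeeping above ends in the per-node check echoed as 'node0=1024 expecting 1024'. For reference, that check can be reproduced outside the test harness with a small sysfs loop; the sketch below assumes the standard /sys/devices/system/node hugepage layout and a hypothetical expected count of 1024, and is not the SPDK helper itself.

# Hedged sketch: compare each NUMA node's 2048 kB hugepage count against an
# expected value, mirroring the "node0=1024 expecting 1024" line above.
expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}
    count=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node${node}=${count} expecting ${expected}"
    (( count == expected )) || echo "node${node}: unexpected hugepage count" >&2
done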
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.485 12:16:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:25.696 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:25.696 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:25.696 INFO: Requested 512 hugepages but 1024 already allocated on node0
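The CLEAR_HUGE=no / NRHUGE=512 pair above is what drives scripts/setup.sh here: it asks for 512 hugepages without releasing the existing reservation, and setup.sh answers with the INFO line because node0 already holds 1024 pages. Assuming setup.sh reads those knobs from the environment, as the trace suggests, the equivalent manual invocation would look roughly like this:

# Hedged sketch of the invocation traced above; run from the SPDK checkout,
# typically as root.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh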
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.696 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110595688 kB' 'MemAvailable: 113814600 kB' 'Buffers: 3736 kB' 'Cached: 9464836 kB' 'SwapCached: 0 kB' 'Active: 6531476 kB' 'Inactive: 3511840 kB' 'Active(anon): 6131712 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577556 kB' 'Mapped: 228132 kB' 'Shmem: 5556968 kB' 'KReclaimable: 263304 kB' 'Slab: 930824 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667520 kB' 'KernelStack: 25024 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7657960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229852 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB'
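The printf above dumps the /proc/meminfo snapshot that get_meminfo is about to search. What the subsequent trace lines do is mechanical: split each "Field: value" pair on ': ', continue past fields that do not match the requested name, and echo the value of the one that does. A minimal standalone sketch of that loop follows (get_field is a hypothetical helper name, not the test's own function).

# Split each /proc/meminfo line on ': ', skip non-matching fields, print the
# value of the requested one (the unit, e.g. kB, falls into the discarded _).
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
}
get_field AnonHugePages   # prints 0 on this machine, matching the trace below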
[trace condensed: setup/common.sh@31-32 stepped through every field of the snapshot above (MemTotal through HardwareCorrupted), continuing past each one until AnonHugePages matched]
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
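The anon=0 result closes out the AnonHugePages probe. That counter tracks transparent hugepages rather than the hugetlbfs pool, which is presumably why hugepages.sh@96 first checked that /sys/kernel/mm/transparent_hugepage/enabled is not set to [never] before sampling it. A hedged standalone version of that gate:

# Only sample AnonHugePages when THP is not globally disabled; the sysfs path
# and the "always [madvise] never" format are the standard kernel ones.
thp_state=$(< /sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp_state != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "AnonHugePages (transparent hugepages) mapped: ${anon} kB"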
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.698 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110596144 kB' 'MemAvailable: 113815056 kB' 'Buffers: 3736 kB' 'Cached: 9464840 kB' 'SwapCached: 0 kB' 'Active: 6531300 kB' 'Inactive: 3511840 kB' 'Active(anon): 6131536 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577376 kB' 'Mapped: 228128 kB' 'Shmem: 5556972 kB' 'KReclaimable: 263304 kB' 'Slab: 930824 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667520 kB' 'KernelStack: 25056 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7657636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229836 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB'
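Because get_meminfo was called with an empty node here, common.sh@23 fell through to /proc/meminfo; with a node number it would read the per-node file instead, which is why common.sh@29 strips the "Node <N> " prefix from every line. A hedged sketch of the per-node variant, discarding the first two read fields instead of using the extglob strip:

# Read a hugepage counter for a single NUMA node from its per-node meminfo.
node=0
node_f=/sys/devices/system/node/node${node}/meminfo
if [[ -e $node_f ]]; then
    # per-node lines look like: "Node 0 HugePages_Total:  1024"
    while IFS=': ' read -r _ _ var val _; do
        [[ $var == HugePages_Total ]] && echo "node${node} HugePages_Total=${val}"
    done < "$node_f"
fi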
[trace condensed: the same setup/common.sh@31-32 loop walked the second snapshot field by field (Buffers through HugePages_Rsvd), continuing past each one until HugePages_Surp matched]
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
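With surp=0 recorded, the only hugetlbfs counter still to be fetched is HugePages_Rsvd (next call below). The snapshots above already pin down the whole pool state, and the same numbers can be pulled directly with awk; the arithmetic at the end is the usual "Total - Free = pages currently in use".

# Summarize the hugetlbfs pool from /proc/meminfo (values match the snapshots:
# Total=1024, Free=1024, Rsvd=0, Surp=0, Hugepagesize=2048 kB).
grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
free=$(awk '$1 == "HugePages_Free:" {print $2}' /proc/meminfo)
echo "hugepages in use: $((total - free))"   # 1024 - 1024 = 0 here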
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110596288 kB' 'MemAvailable: 113815200 kB' 'Buffers: 3736 kB' 'Cached: 9464860 kB' 'SwapCached: 0 kB' 'Active: 6530712 kB' 'Inactive: 3511840 kB' 'Active(anon): 6130948 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577256 kB' 'Mapped: 228092 kB' 'Shmem: 5556992 kB' 'KReclaimable: 263304 kB' 'Slab: 930856 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667552 kB' 'KernelStack: 25184 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7658164 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229852 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB'
00:04:25.700 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 steps through the third snapshot's fields (MemTotal onward), continuing past each one while looking for HugePages_Rsvd; the scan is still in progress at this point in the log]
continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.702 nr_hugepages=1024 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.702 resv_hugepages=0 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.702 surplus_hugepages=0 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.702 anon_hugepages=0 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.702 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126344032 kB' 'MemFree: 110598916 kB' 'MemAvailable: 113817828 kB' 'Buffers: 3736 kB' 'Cached: 9464884 kB' 'SwapCached: 0 kB' 'Active: 6529980 kB' 'Inactive: 3511840 kB' 'Active(anon): 6130216 kB' 'Inactive(anon): 0 kB' 'Active(file): 399764 kB' 'Inactive(file): 3511840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576504 kB' 'Mapped: 228084 kB' 'Shmem: 5557016 kB' 'KReclaimable: 263304 kB' 'Slab: 930856 kB' 'SReclaimable: 263304 kB' 'SUnreclaim: 667552 kB' 'KernelStack: 24880 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512044 kB' 'Committed_AS: 7657936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 229756 kB' 'VmallocChunk: 0 kB' 'Percpu: 94208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2482468 kB' 'DirectMap2M: 17121280 kB' 'DirectMap1G: 116391936 kB' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.703 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.704 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.705 12:16:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59705092 kB' 'MemUsed: 5956908 kB' 'SwapCached: 0 kB' 'Active: 1762636 kB' 'Inactive: 165176 kB' 'Active(anon): 1581740 kB' 'Inactive(anon): 0 kB' 'Active(file): 180896 kB' 'Inactive(file): 165176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567712 kB' 'Mapped: 118624 kB' 'AnonPages: 363288 kB' 'Shmem: 1221640 kB' 'KernelStack: 13256 kB' 'PageTables: 5244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139556 kB' 'Slab: 460484 kB' 'SReclaimable: 139556 kB' 'SUnreclaim: 320928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.705 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 
12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:04:25.706 node0=1024 expecting 1024 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.706 00:04:25.706 real 0m8.248s 00:04:25.706 user 0m3.254s 00:04:25.706 sys 0m5.145s 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.706 12:16:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.706 ************************************ 00:04:25.706 END TEST no_shrink_alloc 00:04:25.706 ************************************ 00:04:25.706 12:16:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.706 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:25.706 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:25.706 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.707 12:16:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.707 00:04:25.707 real 0m31.721s 00:04:25.707 user 0m11.618s 00:04:25.707 sys 0m18.739s 00:04:25.707 12:16:58 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.707 12:16:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.707 ************************************ 00:04:25.707 END TEST hugepages 00:04:25.707 ************************************ 00:04:25.707 12:16:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:25.707 12:16:58 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:25.707 12:16:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.707 12:16:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.707 12:16:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.707 ************************************ 00:04:25.707 START TEST driver 00:04:25.707 ************************************ 00:04:25.707 12:16:58 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:25.707 * Looking for test storage... 
00:04:25.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.707 12:16:59 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:25.707 12:16:59 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.707 12:16:59 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.996 12:17:04 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:30.996 12:17:04 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.996 12:17:04 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.996 12:17:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:30.996 ************************************ 00:04:30.996 START TEST guess_driver 00:04:30.996 ************************************ 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:30.996 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:31.256 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:31.256 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:31.256 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:31.256 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:31.256 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:31.256 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:31.256 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:31.256 12:17:04 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:31.256 Looking for driver=vfio-pci 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.256 12:17:04 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.501 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 
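The guess_driver trace above settles on vfio-pci: unsafe no-IOMMU mode is off (N), 370 IOMMU groups are present, and modprobe --show-depends resolves vfio_pci to real .ko objects, so the picked driver is vfio-pci rather than "No valid driver found". A condensed sketch of that decision; the function names mirror the trace, but this is illustrative, not a drop-in replacement for test/setup/driver.sh:

# Does modprobe resolve the module to actual kernel objects?
is_driver() {
    modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}

pick_driver() {
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    # 370 groups on this CI node, per the trace; the [[ -d ]] guard protects
    # against an unmatched glob when no IOMMU groups exist.
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if [[ -d ${iommu_groups[0]} ]] && (( ${#iommu_groups[@]} > 0 )) ||
       [[ $unsafe_vfio == [Yy] ]]; then
        is_driver vfio_pci && { echo vfio-pci; return 0; }
    fi
    echo 'No valid driver found'
    return 1
}

pick_driver    # prints vfio-pci on this node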
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.502 12:17:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.413 12:17:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.413 12:17:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.413 12:17:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.413 12:17:10 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:37.413 12:17:10 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:37.413 12:17:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.413 12:17:10 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.701 00:04:42.701 real 0m11.360s 00:04:42.701 user 0m3.106s 00:04:42.701 sys 0m5.587s 00:04:42.701 12:17:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.701 12:17:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:42.701 ************************************ 00:04:42.701 END TEST guess_driver 00:04:42.701 ************************************ 00:04:42.701 12:17:15 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:42.701 00:04:42.701 real 0m16.840s 00:04:42.701 user 0m4.773s 00:04:42.701 sys 0m8.585s 00:04:42.701 12:17:15 
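The block of "[[ -> == \-\> ]]" / "[[ vfio-pci == vfio-pci ]]" entries that just finished is guess_driver re-running "setup.sh config" and parsing its per-device lines, which look roughly like "0000:65:00.0 (8086 0a54): nvme -> vfio-pci": field five must be the "->" marker and field six the driver it expects, and any mismatch bumps a fail counter. A sketch of that check, under the assumed output format (the setup.sh path is the workspace one from this log):

# Re-run setup.sh config and confirm every reported device maps to the
# expected driver.  Assumed line shape: "<pci> (<ven> <dev>): <old> -> <new>".
expected=vfio-pci
fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue            # skip banner/summary lines
    [[ $setup_driver == "$expected" ]] || ((++fail))
done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
(( fail == 0 )) && echo "all devices would bind to $expected"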
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.701 12:17:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:42.702 ************************************ 00:04:42.702 END TEST driver 00:04:42.702 ************************************ 00:04:42.702 12:17:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:42.702 12:17:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:42.702 12:17:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.702 12:17:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.702 12:17:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.702 ************************************ 00:04:42.702 START TEST devices 00:04:42.702 ************************************ 00:04:42.702 12:17:15 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:42.702 * Looking for test storage... 00:04:42.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:42.702 12:17:15 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:42.702 12:17:15 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:42.702 12:17:15 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.702 12:17:15 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.906 12:17:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:46.906 12:17:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:46.906 12:17:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:46.906 
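The devices suite opening above builds its test-disk list: every /sys/block/nvme* namespace is checked for a zoned queue (zoned namespaces are excluded), hidden multipath "c" nodes are filtered out, each remaining namespace is mapped to its controller's PCI address, and only disks of at least min_disk_size (3221225472 bytes, i.e. 3 GiB) survive; the roughly 2 TB disk behind 0000:65:00.0 passes. A rough equivalent of that discovery loop (the readlink-based PCI lookup is an illustrative shortcut, not the exact derivation the script uses):

min_disk_size=3221225472              # 3 GiB, as in the trace
declare -A blocks_to_pci=()
for dev in /sys/block/nvme*n*; do
    name=${dev##*/}
    [[ $name == *c* ]] && continue    # skip hidden multipath controller nodes
    # Zoned namespaces report something other than "none" here.
    [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] && continue
    size_bytes=$(( $(<"$dev/size") * 512 ))   # the size file counts 512-byte sectors
    (( size_bytes >= min_disk_size )) || continue
    # Namespace -> controller -> PCI function, via the sysfs device links.
    pci=$(basename "$(readlink -f "$dev/device/device")")
    blocks_to_pci[$name]=$pci
done
declare -p blocks_to_pci              # e.g. blocks_to_pci=([nvme0n1]="0000:65:00.0")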
12:17:20 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:47.167 No valid GPT data, bailing 00:04:47.167 12:17:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.167 12:17:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:47.167 12:17:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:47.167 12:17:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:47.167 12:17:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:47.167 12:17:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:47.167 12:17:20 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:47.167 12:17:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:47.167 12:17:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:47.167 12:17:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:47.167 12:17:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:47.167 12:17:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:47.167 12:17:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:47.167 12:17:20 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.167 12:17:20 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.167 12:17:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.167 ************************************ 00:04:47.167 START TEST nvme_mount 00:04:47.167 ************************************ 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.167 12:17:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:48.118 Creating new GPT entries in memory. 00:04:48.118 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.118 other utilities. 00:04:48.118 12:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.118 12:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.118 12:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.118 12:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.118 12:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.058 Creating new GPT entries in memory. 00:04:49.058 The operation has completed successfully. 00:04:49.058 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.058 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.058 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 195231 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.319 12:17:22 
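The nvme_mount preparation traced above boils down to: wipe the GPT, carve a single 1 GiB partition (1073741824 bytes becomes 2097152 sectors, giving the 2048..2099199 range passed to sgdisk), wait for the partition uevent, format the new partition ext4 and mount it under the test directory. A condensed, destructive sketch with placeholder device and mountpoint (udevadm settle stands in for scripts/sync_dev_uevents.sh):

disk=/dev/nvme0n1          # placeholder test disk
mnt=/tmp/nvme_mount        # placeholder mountpoint
size=1073741824            # 1 GiB
(( sectors = size / 512 )) # 2097152
part_start=2048
(( part_end = part_start + sectors - 1 ))   # 2099199, matching the trace

sgdisk "$disk" --zap-all
# flock holds the disk node while sgdisk rewrites the table, so concurrent
# probes of the same device are serialised.
flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}
udevadm settle
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"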
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.319 12:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.527 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.527 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.527 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.527 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.527 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.527 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.528 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:53.528 12:17:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.528 12:17:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.528 12:17:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 
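The verify step starting above is the core assertion of the nvme_mount test: with PCI_ALLOWED narrowed to 0000:65:00.0, "setup.sh config" must report the controller as having active devices (here the whole-disk mount) and therefore refuse to bind it to vfio-pci. A sketch of that scan, under the assumed shape of the config output (first field is the PCI address, the trailing fields the status text):

target_pci=0000:65:00.0
expected='mount@nvme0n1:nvme0n1'      # the active device the test expects
found=0
while read -r pci _ _ status; do
    [[ $pci == "$target_pci" ]] || continue
    [[ $status == *'Active devices: '*"$expected"* ]] && found=1
done < <(PCI_ALLOWED=$target_pci \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
(( found == 1 )) && echo "setup.sh left $target_pci alone while it was in use"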
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.829 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.091 12:17:30 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.091 12:17:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 
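After the test file is removed and the filesystem unmounted above, the raw namespace still carries an ext4 superblock, so the verify pass that starts here expects setup.sh to keep reporting the controller as holding data (the "data@nvme0n1" status) and to keep skipping it. One hedged way to probe for such leftover signatures, purely illustrative and not necessarily the check setup.sh itself performs:

dev=/dev/nvme0n1    # placeholder
if blkid -p -s TYPE -o value "$dev" >/dev/null 2>&1; then
    echo "$dev still carries a filesystem signature; leaving it bound to the kernel driver"
fi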
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.298 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.298 00:05:01.298 real 0m14.030s 00:05:01.298 user 0m4.218s 00:05:01.298 sys 0m7.674s 00:05:01.298 12:17:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.298 12:17:34 
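The cleanup_nvme pass that just ran reverses everything the test did: unmount the test directory if it is still a mountpoint, then wipefs both the partition and the whole disk so the following test starts from an unsigned device (the "2 bytes were erased ... 53 ef" line is the ext4 magic going away). A compact sketch with placeholder paths:

cleanup_nvme() {
    local mnt=/tmp/nvme_mount disk=/dev/nvme0n1   # placeholders
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
    [[ -b $disk ]] && wipefs --all "$disk"
    return 0
}
cleanup_nvme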
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.298 ************************************ 00:05:01.298 END TEST nvme_mount 00:05:01.298 ************************************ 00:05:01.298 12:17:34 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:01.299 12:17:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:01.299 12:17:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.299 12:17:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.299 12:17:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.299 ************************************ 00:05:01.299 START TEST dm_mount 00:05:01.299 ************************************ 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.299 12:17:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:02.238 Creating new GPT entries in memory. 00:05:02.238 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:02.238 other utilities. 00:05:02.238 12:17:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:02.238 12:17:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.238 12:17:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:02.238 12:17:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.238 12:17:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:03.178 Creating new GPT entries in memory. 00:05:03.178 The operation has completed successfully. 00:05:03.178 12:17:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.178 12:17:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.178 12:17:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.178 12:17:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.178 12:17:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:04.561 The operation has completed successfully. 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 200370 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:04.561 12:17:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.562 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.562 12:17:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.858 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.118 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.118 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:08.118 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:08.118 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.118 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.118 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:08.119 12:17:41 
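The dm_mount test above goes one step further than nvme_mount: it creates two 1 GiB partitions, stitches them into a single device-mapper node (nvme_dm_test, resolving to /dev/dm-0), formats and mounts that node, and then expects setup.sh to refuse the controller because both partitions show up as holders of dm-0. The dmsetup table itself is not visible in the trace; the linear concatenation below is an assumption, and the device names are placeholders:

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")     # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
ls "/sys/class/block/${p1##*/}/holders/"                   # should list $dm
mkfs.ext4 -qF /dev/mapper/nvme_dm_test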
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.119 12:17:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:12.405 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:12.405 00:05:12.405 real 0m10.904s 00:05:12.405 user 0m2.727s 00:05:12.405 sys 0m5.237s 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.405 12:17:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:12.405 ************************************ 00:05:12.405 END TEST dm_mount 00:05:12.405 ************************************ 00:05:12.405 12:17:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.405 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:12.405 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:12.405 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:12.405 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.405 12:17:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:12.405 00:05:12.405 real 0m29.892s 00:05:12.405 user 0m8.737s 00:05:12.405 sys 0m15.966s 00:05:12.405 12:17:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.405 12:17:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:12.405 ************************************ 00:05:12.405 END TEST devices 00:05:12.405 ************************************ 00:05:12.405 12:17:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:12.405 00:05:12.405 real 1m47.340s 00:05:12.405 user 0m34.550s 00:05:12.405 sys 1m0.479s 00:05:12.405 12:17:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.405 12:17:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.405 ************************************ 00:05:12.405 END TEST setup.sh 00:05:12.405 ************************************ 00:05:12.667 12:17:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.667 12:17:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:16.872 Hugepages 00:05:16.872 node hugesize free / total 00:05:16.872 node0 1048576kB 0 / 0 00:05:16.872 node0 2048kB 2048 / 2048 00:05:16.872 node1 1048576kB 0 / 0 00:05:16.872 node1 2048kB 0 / 0 00:05:16.872 00:05:16.872 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:16.872 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:16.872 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:16.872 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:16.872 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:16.872 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:16.872 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:16.872 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:16.872 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:16.872 NVMe 
0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:16.872 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:16.872 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:16.872 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:16.872 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:16.872 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:16.872 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:16.872 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:16.872 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:16.872 12:17:49 -- spdk/autotest.sh@130 -- # uname -s 00:05:16.872 12:17:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:16.872 12:17:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:16.872 12:17:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.074 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:21.075 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:22.458 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:05:22.458 12:17:55 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:23.842 12:17:56 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:23.842 12:17:56 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:23.842 12:17:56 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.842 12:17:56 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:23.842 12:17:56 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:23.842 12:17:56 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:23.842 12:17:56 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.842 12:17:56 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:23.842 12:17:56 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:23.843 12:17:56 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:23.843 12:17:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:23.843 12:17:56 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.050 Waiting for block devices as requested 00:05:28.050 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:28.050 0000:65:00.0 (8086 0a54): 
vfio-pci -> nvme 00:05:28.311 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:28.311 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:28.572 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:28.572 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:28.572 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:28.834 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:28.834 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:28.834 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:28.834 12:18:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:28.834 12:18:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:28.834 12:18:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:28.834 12:18:02 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:28.834 12:18:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:28.834 12:18:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:28.834 12:18:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:28.834 12:18:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:28.834 12:18:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:28.834 12:18:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:28.834 12:18:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:28.834 12:18:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:28.834 12:18:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:29.095 12:18:02 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:29.095 12:18:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:29.095 12:18:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:29.095 12:18:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:29.095 12:18:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:29.095 12:18:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:29.095 12:18:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:29.095 12:18:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:29.095 12:18:02 -- common/autotest_common.sh@1557 -- # continue 00:05:29.095 12:18:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:29.095 12:18:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.095 12:18:02 -- common/autotest_common.sh@10 -- # set +x 00:05:29.095 12:18:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:29.095 12:18:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.095 12:18:02 -- common/autotest_common.sh@10 -- # set +x 00:05:29.095 12:18:02 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:33.295 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
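The trace just above is the nvme_namespace_revert step from autotest_common.sh: each bdf from get_nvme_bdfs is resolved to its /dev/nvmeX controller through sysfs, the OACS word is read with nvme id-ctrl to confirm namespace management support (bit 0x8), and then unvmcap is checked; since unvmcap is 0 on this drive the loop simply continues. A stand-alone sketch of the same check, assuming nvme-cli is installed and the controller name is already known, not the exact autotest_common.sh code:

  ctrlr=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)            # ' 0xe' in this run
  if (( oacs & 0x8 )); then                                          # bit 3 = namespace management
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "no unallocated capacity, nothing to revert on $ctrlr"
  fi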
00:05:33.295 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:33.295 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.678 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:05:34.938 12:18:08 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:34.938 12:18:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.938 12:18:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.938 12:18:08 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:34.938 12:18:08 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:34.938 12:18:08 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.938 12:18:08 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:34.938 12:18:08 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:34.938 12:18:08 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:34.938 12:18:08 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:34.938 12:18:08 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:34.938 12:18:08 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.938 12:18:08 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.938 12:18:08 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:34.938 12:18:08 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:34.938 12:18:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:34.938 12:18:08 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:34.938 12:18:08 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:34.938 12:18:08 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:34.938 12:18:08 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:34.938 12:18:08 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:34.938 12:18:08 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:65:00.0 00:05:34.938 12:18:08 -- common/autotest_common.sh@1592 -- # [[ -z 0000:65:00.0 ]] 00:05:34.938 12:18:08 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=211681 00:05:34.938 12:18:08 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.938 12:18:08 -- common/autotest_common.sh@1598 -- # waitforlisten 211681 00:05:34.938 12:18:08 -- common/autotest_common.sh@829 -- # '[' -z 211681 ']' 00:05:34.938 12:18:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.938 12:18:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.938 12:18:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.938 12:18:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.938 12:18:08 -- common/autotest_common.sh@10 -- # set +x 00:05:35.199 [2024-07-25 12:18:08.398798] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
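opal_revert_cleanup builds its device list with get_nvme_bdfs_by_id 0x0a54, keeping only controllers whose sysfs device file matches that PCI device ID before spdk_tgt is started. A minimal sketch of that filter, reusing the gen_nvme.sh and jq invocation visible in the trace:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  want=0x0a54
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      # e.g. /sys/bus/pci/devices/0000:65:00.0/device contains 0x0a54 on this node
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && echo "$bdf"
  done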
00:05:35.199 [2024-07-25 12:18:08.398862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211681 ] 00:05:35.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.199 [2024-07-25 12:18:08.484401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.199 [2024-07-25 12:18:08.578271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.139 12:18:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.139 12:18:09 -- common/autotest_common.sh@862 -- # return 0 00:05:36.139 12:18:09 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:36.139 12:18:09 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:36.139 12:18:09 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:65:00.0 00:05:39.435 nvme0n1 00:05:39.435 12:18:12 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:39.435 [2024-07-25 12:18:12.483113] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:39.435 request: 00:05:39.435 { 00:05:39.435 "nvme_ctrlr_name": "nvme0", 00:05:39.435 "password": "test", 00:05:39.435 "method": "bdev_nvme_opal_revert", 00:05:39.435 "req_id": 1 00:05:39.435 } 00:05:39.435 Got JSON-RPC error response 00:05:39.435 response: 00:05:39.435 { 00:05:39.435 "code": -32602, 00:05:39.435 "message": "Invalid parameters" 00:05:39.435 } 00:05:39.435 12:18:12 -- common/autotest_common.sh@1604 -- # true 00:05:39.435 12:18:12 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:39.435 12:18:12 -- common/autotest_common.sh@1608 -- # killprocess 211681 00:05:39.435 12:18:12 -- common/autotest_common.sh@948 -- # '[' -z 211681 ']' 00:05:39.435 12:18:12 -- common/autotest_common.sh@952 -- # kill -0 211681 00:05:39.435 12:18:12 -- common/autotest_common.sh@953 -- # uname 00:05:39.435 12:18:12 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.435 12:18:12 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 211681 00:05:39.435 12:18:12 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.435 12:18:12 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.435 12:18:12 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 211681' 00:05:39.435 killing process with pid 211681 00:05:39.435 12:18:12 -- common/autotest_common.sh@967 -- # kill 211681 00:05:39.435 12:18:12 -- common/autotest_common.sh@972 -- # wait 211681 00:05:42.012 12:18:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:42.012 12:18:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:42.012 12:18:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:42.012 12:18:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:42.012 12:18:15 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:42.012 12:18:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.012 12:18:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.012 12:18:15 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:42.012 12:18:15 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:42.012 12:18:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
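The failed call above is the expected outcome on this drive: the harness attaches the controller over PCIe with bdev_nvme_attach_controller, then asks for bdev_nvme_opal_revert, and the target answers -32602 because the controller does not support Opal, which the script tolerates before killing the target. Issued by hand against the same default /var/tmp/spdk.sock socket, the two RPCs would be roughly:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:65:00.0   # prints nvme0n1
  "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true                 # -32602 here: no Opal support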
00:05:42.012 12:18:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.012 12:18:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.012 ************************************ 00:05:42.012 START TEST env 00:05:42.012 ************************************ 00:05:42.012 12:18:15 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:42.012 * Looking for test storage... 00:05:42.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:42.012 12:18:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:42.012 12:18:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.012 12:18:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.012 12:18:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.012 ************************************ 00:05:42.012 START TEST env_memory 00:05:42.012 ************************************ 00:05:42.012 12:18:15 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:42.012 00:05:42.012 00:05:42.012 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.012 http://cunit.sourceforge.net/ 00:05:42.012 00:05:42.012 00:05:42.012 Suite: memory 00:05:42.012 Test: alloc and free memory map ...[2024-07-25 12:18:15.262456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:42.012 passed 00:05:42.012 Test: mem map translation ...[2024-07-25 12:18:15.286174] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:42.012 [2024-07-25 12:18:15.286202] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:42.012 [2024-07-25 12:18:15.286247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:42.012 [2024-07-25 12:18:15.286256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:42.012 passed 00:05:42.012 Test: mem map registration ...[2024-07-25 12:18:15.337264] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:42.012 [2024-07-25 12:18:15.337285] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:42.012 passed 00:05:42.012 Test: mem map adjacent registrations ...passed 00:05:42.012 00:05:42.012 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.012 suites 1 1 n/a 0 0 00:05:42.012 tests 4 4 4 0 0 00:05:42.012 asserts 152 152 152 0 n/a 00:05:42.012 00:05:42.012 Elapsed time = 0.182 seconds 00:05:42.012 00:05:42.012 real 0m0.196s 00:05:42.012 user 0m0.186s 00:05:42.012 sys 0m0.009s 00:05:42.012 12:18:15 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.012 12:18:15 env.env_memory -- common/autotest_common.sh@10 -- # set 
+x 00:05:42.012 ************************************ 00:05:42.012 END TEST env_memory 00:05:42.012 ************************************ 00:05:42.273 12:18:15 env -- common/autotest_common.sh@1142 -- # return 0 00:05:42.273 12:18:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.273 12:18:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.273 12:18:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.273 12:18:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.273 ************************************ 00:05:42.273 START TEST env_vtophys 00:05:42.273 ************************************ 00:05:42.273 12:18:15 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.273 EAL: lib.eal log level changed from notice to debug 00:05:42.273 EAL: Detected lcore 0 as core 0 on socket 0 00:05:42.273 EAL: Detected lcore 1 as core 1 on socket 0 00:05:42.273 EAL: Detected lcore 2 as core 2 on socket 0 00:05:42.273 EAL: Detected lcore 3 as core 3 on socket 0 00:05:42.273 EAL: Detected lcore 4 as core 4 on socket 0 00:05:42.273 EAL: Detected lcore 5 as core 5 on socket 0 00:05:42.273 EAL: Detected lcore 6 as core 6 on socket 0 00:05:42.273 EAL: Detected lcore 7 as core 7 on socket 0 00:05:42.273 EAL: Detected lcore 8 as core 8 on socket 0 00:05:42.273 EAL: Detected lcore 9 as core 9 on socket 0 00:05:42.273 EAL: Detected lcore 10 as core 10 on socket 0 00:05:42.273 EAL: Detected lcore 11 as core 11 on socket 0 00:05:42.273 EAL: Detected lcore 12 as core 12 on socket 0 00:05:42.273 EAL: Detected lcore 13 as core 13 on socket 0 00:05:42.273 EAL: Detected lcore 14 as core 14 on socket 0 00:05:42.273 EAL: Detected lcore 15 as core 15 on socket 0 00:05:42.273 EAL: Detected lcore 16 as core 16 on socket 0 00:05:42.273 EAL: Detected lcore 17 as core 17 on socket 0 00:05:42.273 EAL: Detected lcore 18 as core 18 on socket 0 00:05:42.273 EAL: Detected lcore 19 as core 19 on socket 0 00:05:42.273 EAL: Detected lcore 20 as core 20 on socket 0 00:05:42.273 EAL: Detected lcore 21 as core 21 on socket 0 00:05:42.273 EAL: Detected lcore 22 as core 22 on socket 0 00:05:42.273 EAL: Detected lcore 23 as core 23 on socket 0 00:05:42.273 EAL: Detected lcore 24 as core 24 on socket 0 00:05:42.273 EAL: Detected lcore 25 as core 25 on socket 0 00:05:42.273 EAL: Detected lcore 26 as core 26 on socket 0 00:05:42.273 EAL: Detected lcore 27 as core 27 on socket 0 00:05:42.273 EAL: Detected lcore 28 as core 28 on socket 0 00:05:42.273 EAL: Detected lcore 29 as core 29 on socket 0 00:05:42.273 EAL: Detected lcore 30 as core 30 on socket 0 00:05:42.273 EAL: Detected lcore 31 as core 31 on socket 0 00:05:42.273 EAL: Detected lcore 32 as core 0 on socket 1 00:05:42.273 EAL: Detected lcore 33 as core 1 on socket 1 00:05:42.273 EAL: Detected lcore 34 as core 2 on socket 1 00:05:42.273 EAL: Detected lcore 35 as core 3 on socket 1 00:05:42.273 EAL: Detected lcore 36 as core 4 on socket 1 00:05:42.273 EAL: Detected lcore 37 as core 5 on socket 1 00:05:42.273 EAL: Detected lcore 38 as core 6 on socket 1 00:05:42.273 EAL: Detected lcore 39 as core 7 on socket 1 00:05:42.273 EAL: Detected lcore 40 as core 8 on socket 1 00:05:42.273 EAL: Detected lcore 41 as core 9 on socket 1 00:05:42.273 EAL: Detected lcore 42 as core 10 on socket 1 00:05:42.273 EAL: Detected lcore 43 as core 11 on socket 1 00:05:42.273 EAL: Detected lcore 44 as core 12 on 
socket 1 00:05:42.273 EAL: Detected lcore 45 as core 13 on socket 1 00:05:42.273 EAL: Detected lcore 46 as core 14 on socket 1 00:05:42.273 EAL: Detected lcore 47 as core 15 on socket 1 00:05:42.273 EAL: Detected lcore 48 as core 16 on socket 1 00:05:42.273 EAL: Detected lcore 49 as core 17 on socket 1 00:05:42.273 EAL: Detected lcore 50 as core 18 on socket 1 00:05:42.273 EAL: Detected lcore 51 as core 19 on socket 1 00:05:42.273 EAL: Detected lcore 52 as core 20 on socket 1 00:05:42.273 EAL: Detected lcore 53 as core 21 on socket 1 00:05:42.273 EAL: Detected lcore 54 as core 22 on socket 1 00:05:42.273 EAL: Detected lcore 55 as core 23 on socket 1 00:05:42.273 EAL: Detected lcore 56 as core 24 on socket 1 00:05:42.273 EAL: Detected lcore 57 as core 25 on socket 1 00:05:42.273 EAL: Detected lcore 58 as core 26 on socket 1 00:05:42.273 EAL: Detected lcore 59 as core 27 on socket 1 00:05:42.273 EAL: Detected lcore 60 as core 28 on socket 1 00:05:42.273 EAL: Detected lcore 61 as core 29 on socket 1 00:05:42.273 EAL: Detected lcore 62 as core 30 on socket 1 00:05:42.273 EAL: Detected lcore 63 as core 31 on socket 1 00:05:42.273 EAL: Detected lcore 64 as core 0 on socket 0 00:05:42.273 EAL: Detected lcore 65 as core 1 on socket 0 00:05:42.274 EAL: Detected lcore 66 as core 2 on socket 0 00:05:42.274 EAL: Detected lcore 67 as core 3 on socket 0 00:05:42.274 EAL: Detected lcore 68 as core 4 on socket 0 00:05:42.274 EAL: Detected lcore 69 as core 5 on socket 0 00:05:42.274 EAL: Detected lcore 70 as core 6 on socket 0 00:05:42.274 EAL: Detected lcore 71 as core 7 on socket 0 00:05:42.274 EAL: Detected lcore 72 as core 8 on socket 0 00:05:42.274 EAL: Detected lcore 73 as core 9 on socket 0 00:05:42.274 EAL: Detected lcore 74 as core 10 on socket 0 00:05:42.274 EAL: Detected lcore 75 as core 11 on socket 0 00:05:42.274 EAL: Detected lcore 76 as core 12 on socket 0 00:05:42.274 EAL: Detected lcore 77 as core 13 on socket 0 00:05:42.274 EAL: Detected lcore 78 as core 14 on socket 0 00:05:42.274 EAL: Detected lcore 79 as core 15 on socket 0 00:05:42.274 EAL: Detected lcore 80 as core 16 on socket 0 00:05:42.274 EAL: Detected lcore 81 as core 17 on socket 0 00:05:42.274 EAL: Detected lcore 82 as core 18 on socket 0 00:05:42.274 EAL: Detected lcore 83 as core 19 on socket 0 00:05:42.274 EAL: Detected lcore 84 as core 20 on socket 0 00:05:42.274 EAL: Detected lcore 85 as core 21 on socket 0 00:05:42.274 EAL: Detected lcore 86 as core 22 on socket 0 00:05:42.274 EAL: Detected lcore 87 as core 23 on socket 0 00:05:42.274 EAL: Detected lcore 88 as core 24 on socket 0 00:05:42.274 EAL: Detected lcore 89 as core 25 on socket 0 00:05:42.274 EAL: Detected lcore 90 as core 26 on socket 0 00:05:42.274 EAL: Detected lcore 91 as core 27 on socket 0 00:05:42.274 EAL: Detected lcore 92 as core 28 on socket 0 00:05:42.274 EAL: Detected lcore 93 as core 29 on socket 0 00:05:42.274 EAL: Detected lcore 94 as core 30 on socket 0 00:05:42.274 EAL: Detected lcore 95 as core 31 on socket 0 00:05:42.274 EAL: Detected lcore 96 as core 0 on socket 1 00:05:42.274 EAL: Detected lcore 97 as core 1 on socket 1 00:05:42.274 EAL: Detected lcore 98 as core 2 on socket 1 00:05:42.274 EAL: Detected lcore 99 as core 3 on socket 1 00:05:42.274 EAL: Detected lcore 100 as core 4 on socket 1 00:05:42.274 EAL: Detected lcore 101 as core 5 on socket 1 00:05:42.274 EAL: Detected lcore 102 as core 6 on socket 1 00:05:42.274 EAL: Detected lcore 103 as core 7 on socket 1 00:05:42.274 EAL: Detected lcore 104 as core 8 on socket 1 00:05:42.274 EAL: 
Detected lcore 105 as core 9 on socket 1 00:05:42.274 EAL: Detected lcore 106 as core 10 on socket 1 00:05:42.274 EAL: Detected lcore 107 as core 11 on socket 1 00:05:42.274 EAL: Detected lcore 108 as core 12 on socket 1 00:05:42.274 EAL: Detected lcore 109 as core 13 on socket 1 00:05:42.274 EAL: Detected lcore 110 as core 14 on socket 1 00:05:42.274 EAL: Detected lcore 111 as core 15 on socket 1 00:05:42.274 EAL: Detected lcore 112 as core 16 on socket 1 00:05:42.274 EAL: Detected lcore 113 as core 17 on socket 1 00:05:42.274 EAL: Detected lcore 114 as core 18 on socket 1 00:05:42.274 EAL: Detected lcore 115 as core 19 on socket 1 00:05:42.274 EAL: Detected lcore 116 as core 20 on socket 1 00:05:42.274 EAL: Detected lcore 117 as core 21 on socket 1 00:05:42.274 EAL: Detected lcore 118 as core 22 on socket 1 00:05:42.274 EAL: Detected lcore 119 as core 23 on socket 1 00:05:42.274 EAL: Detected lcore 120 as core 24 on socket 1 00:05:42.274 EAL: Detected lcore 121 as core 25 on socket 1 00:05:42.274 EAL: Detected lcore 122 as core 26 on socket 1 00:05:42.274 EAL: Detected lcore 123 as core 27 on socket 1 00:05:42.274 EAL: Detected lcore 124 as core 28 on socket 1 00:05:42.274 EAL: Detected lcore 125 as core 29 on socket 1 00:05:42.274 EAL: Detected lcore 126 as core 30 on socket 1 00:05:42.274 EAL: Detected lcore 127 as core 31 on socket 1 00:05:42.274 EAL: Maximum logical cores by configuration: 128 00:05:42.274 EAL: Detected CPU lcores: 128 00:05:42.274 EAL: Detected NUMA nodes: 2 00:05:42.274 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:42.274 EAL: Detected shared linkage of DPDK 00:05:42.274 EAL: No shared files mode enabled, IPC will be disabled 00:05:42.274 EAL: Bus pci wants IOVA as 'DC' 00:05:42.274 EAL: Buses did not request a specific IOVA mode. 00:05:42.274 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:42.274 EAL: Selected IOVA mode 'VA' 00:05:42.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.274 EAL: Probing VFIO support... 00:05:42.274 EAL: IOMMU type 1 (Type 1) is supported 00:05:42.274 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:42.274 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:42.274 EAL: VFIO support initialized 00:05:42.274 EAL: Ask a virtual area of 0x2e000 bytes 00:05:42.274 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:42.274 EAL: Setting up physically contiguous memory... 
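EAL settles on IOVA mode 'VA' here because VFIO initialized against a working type 1 IOMMU. The same precondition can be checked without DPDK at all; a small sketch, assuming the usual sysfs layout:

  # a host that can back vfio-pci has populated IOMMU groups
  if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
      echo "IOMMU groups present; EAL can select IOVA mode 'VA'"
  else
      echo "no IOMMU groups; EAL would fall back to physical addressing"
  fi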
00:05:42.274 EAL: Setting maximum number of open files to 524288 00:05:42.274 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:42.274 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:42.274 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:42.274 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:42.274 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.274 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:42.274 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.274 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.274 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:42.274 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:42.274 EAL: Hugepages will be freed exactly as allocated. 00:05:42.274 EAL: No shared files mode enabled, IPC is disabled 00:05:42.274 EAL: No shared files mode enabled, IPC is disabled 00:05:42.274 EAL: TSC frequency is ~2600000 KHz 00:05:42.274 EAL: Main lcore 0 is ready (tid=7fae1bd44a00;cpuset=[0]) 00:05:42.274 EAL: Trying to obtain current memory policy. 00:05:42.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.274 EAL: Restoring previous memory policy: 0 00:05:42.274 EAL: request: mp_malloc_sync 00:05:42.274 EAL: No shared files mode enabled, IPC is disabled 00:05:42.274 EAL: Heap on socket 0 was expanded by 2MB 00:05:42.274 EAL: No shared files mode enabled, IPC is disabled 00:05:42.274 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:42.274 EAL: Mem event callback 'spdk:(nil)' registered 00:05:42.274 00:05:42.274 00:05:42.274 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.274 http://cunit.sourceforge.net/ 00:05:42.274 00:05:42.274 00:05:42.274 Suite: components_suite 00:05:42.274 Test: vtophys_malloc_test ...passed 00:05:42.274 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:42.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.274 EAL: Restoring previous memory policy: 4 00:05:42.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.274 EAL: request: mp_malloc_sync 00:05:42.274 EAL: No shared files mode enabled, IPC is disabled 00:05:42.274 EAL: Heap on socket 0 was expanded by 4MB 00:05:42.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.274 EAL: request: mp_malloc_sync 00:05:42.274 EAL: No shared files mode enabled, IPC is disabled 00:05:42.274 EAL: Heap on socket 0 was shrunk by 4MB 00:05:42.274 EAL: Trying to obtain current memory policy. 00:05:42.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.274 EAL: Restoring previous memory policy: 4 00:05:42.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was expanded by 6MB 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was shrunk by 6MB 00:05:42.275 EAL: Trying to obtain current memory policy. 00:05:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.275 EAL: Restoring previous memory policy: 4 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was expanded by 10MB 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was shrunk by 10MB 00:05:42.275 EAL: Trying to obtain current memory policy. 
00:05:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.275 EAL: Restoring previous memory policy: 4 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was expanded by 18MB 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was shrunk by 18MB 00:05:42.275 EAL: Trying to obtain current memory policy. 00:05:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.275 EAL: Restoring previous memory policy: 4 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was expanded by 34MB 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was shrunk by 34MB 00:05:42.275 EAL: Trying to obtain current memory policy. 00:05:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.275 EAL: Restoring previous memory policy: 4 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was expanded by 66MB 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was shrunk by 66MB 00:05:42.275 EAL: Trying to obtain current memory policy. 00:05:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.275 EAL: Restoring previous memory policy: 4 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was expanded by 130MB 00:05:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.275 EAL: request: mp_malloc_sync 00:05:42.275 EAL: No shared files mode enabled, IPC is disabled 00:05:42.275 EAL: Heap on socket 0 was shrunk by 130MB 00:05:42.275 EAL: Trying to obtain current memory policy. 00:05:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.535 EAL: Restoring previous memory policy: 4 00:05:42.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.535 EAL: request: mp_malloc_sync 00:05:42.535 EAL: No shared files mode enabled, IPC is disabled 00:05:42.535 EAL: Heap on socket 0 was expanded by 258MB 00:05:42.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.535 EAL: request: mp_malloc_sync 00:05:42.535 EAL: No shared files mode enabled, IPC is disabled 00:05:42.535 EAL: Heap on socket 0 was shrunk by 258MB 00:05:42.535 EAL: Trying to obtain current memory policy. 
00:05:42.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.535 EAL: Restoring previous memory policy: 4 00:05:42.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.535 EAL: request: mp_malloc_sync 00:05:42.535 EAL: No shared files mode enabled, IPC is disabled 00:05:42.535 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.535 EAL: request: mp_malloc_sync 00:05:42.535 EAL: No shared files mode enabled, IPC is disabled 00:05:42.535 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.535 EAL: Trying to obtain current memory policy. 00:05:42.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.794 EAL: Restoring previous memory policy: 4 00:05:42.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.794 EAL: request: mp_malloc_sync 00:05:42.794 EAL: No shared files mode enabled, IPC is disabled 00:05:42.794 EAL: Heap on socket 0 was expanded by 1026MB 00:05:42.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.054 EAL: request: mp_malloc_sync 00:05:43.054 EAL: No shared files mode enabled, IPC is disabled 00:05:43.054 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.054 passed 00:05:43.054 00:05:43.054 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.054 suites 1 1 n/a 0 0 00:05:43.054 tests 2 2 2 0 0 00:05:43.054 asserts 497 497 497 0 n/a 00:05:43.054 00:05:43.054 Elapsed time = 0.631 seconds 00:05:43.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.054 EAL: request: mp_malloc_sync 00:05:43.054 EAL: No shared files mode enabled, IPC is disabled 00:05:43.054 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.054 EAL: No shared files mode enabled, IPC is disabled 00:05:43.054 EAL: No shared files mode enabled, IPC is disabled 00:05:43.054 EAL: No shared files mode enabled, IPC is disabled 00:05:43.054 00:05:43.054 real 0m0.767s 00:05:43.054 user 0m0.401s 00:05:43.054 sys 0m0.343s 00:05:43.054 12:18:16 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.054 12:18:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.054 ************************************ 00:05:43.054 END TEST env_vtophys 00:05:43.054 ************************************ 00:05:43.054 12:18:16 env -- common/autotest_common.sh@1142 -- # return 0 00:05:43.054 12:18:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:43.054 12:18:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.054 12:18:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.054 12:18:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.054 ************************************ 00:05:43.054 START TEST env_pci 00:05:43.054 ************************************ 00:05:43.054 12:18:16 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:43.054 00:05:43.054 00:05:43.054 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.054 http://cunit.sourceforge.net/ 00:05:43.054 00:05:43.054 00:05:43.054 Suite: pci 00:05:43.054 Test: pci_hook ...[2024-07-25 12:18:16.342210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 213710 has claimed it 00:05:43.054 EAL: Cannot find device (10000:00:01.0) 00:05:43.054 EAL: Failed to attach device on primary process 00:05:43.054 passed 00:05:43.054 
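The expand/shrink pairs that vtophys_spdk_malloc_test logged just before this env_pci run follow a simple series: 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB, i.e. 2^n + 2 MB for n = 1..10, with every expansion matched by a shrink once the buffer is freed. As a quick arithmetic check of the logged sizes:

  # prints the heap-expansion sizes seen above, in order
  for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB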
00:05:43.054 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.054 suites 1 1 n/a 0 0 00:05:43.054 tests 1 1 1 0 0 00:05:43.054 asserts 25 25 25 0 n/a 00:05:43.054 00:05:43.054 Elapsed time = 0.035 seconds 00:05:43.054 00:05:43.054 real 0m0.056s 00:05:43.054 user 0m0.017s 00:05:43.054 sys 0m0.039s 00:05:43.054 12:18:16 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.054 12:18:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.054 ************************************ 00:05:43.054 END TEST env_pci 00:05:43.054 ************************************ 00:05:43.054 12:18:16 env -- common/autotest_common.sh@1142 -- # return 0 00:05:43.054 12:18:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.054 12:18:16 env -- env/env.sh@15 -- # uname 00:05:43.054 12:18:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.054 12:18:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.054 12:18:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.054 12:18:16 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:43.054 12:18:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.054 12:18:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.054 ************************************ 00:05:43.054 START TEST env_dpdk_post_init 00:05:43.054 ************************************ 00:05:43.054 12:18:16 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.313 EAL: Detected CPU lcores: 128 00:05:43.313 EAL: Detected NUMA nodes: 2 00:05:43.313 EAL: Detected shared linkage of DPDK 00:05:43.313 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.313 EAL: Selected IOVA mode 'VA' 00:05:43.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.313 EAL: VFIO support initialized 00:05:43.313 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.313 EAL: Using IOMMU type 1 (Type 1) 00:05:43.313 EAL: Ignore mapping IO port bar(1) 00:05:43.573 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:43.573 EAL: Ignore mapping IO port bar(1) 00:05:43.832 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:43.832 EAL: Ignore mapping IO port bar(1) 00:05:44.092 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:44.092 EAL: Ignore mapping IO port bar(1) 00:05:44.092 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:44.351 EAL: Ignore mapping IO port bar(1) 00:05:44.351 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:44.611 EAL: Ignore mapping IO port bar(1) 00:05:44.611 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:44.870 EAL: Ignore mapping IO port bar(1) 00:05:44.870 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:45.130 EAL: Ignore mapping IO port bar(1) 00:05:45.130 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:46.069 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:65:00.0 (socket 0) 00:05:46.069 EAL: Ignore mapping IO port bar(1) 00:05:46.069 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:46.069 EAL: Ignore mapping IO port bar(1) 00:05:46.329 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:46.329 EAL: Ignore mapping IO port bar(1) 00:05:46.589 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:46.589 EAL: Ignore mapping IO port bar(1) 00:05:46.848 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:46.848 EAL: Ignore mapping IO port bar(1) 00:05:46.848 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:47.108 EAL: Ignore mapping IO port bar(1) 00:05:47.108 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:47.367 EAL: Ignore mapping IO port bar(1) 00:05:47.367 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:47.627 EAL: Ignore mapping IO port bar(1) 00:05:47.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:51.824 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:51.824 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:51.824 Starting DPDK initialization... 00:05:51.824 Starting SPDK post initialization... 00:05:51.824 SPDK NVMe probe 00:05:51.824 Attaching to 0000:65:00.0 00:05:51.824 Attached to 0000:65:00.0 00:05:51.824 Cleaning up... 00:05:53.734 00:05:53.734 real 0m10.398s 00:05:53.734 user 0m4.263s 00:05:53.734 sys 0m0.157s 00:05:53.734 12:18:26 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.734 12:18:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.734 ************************************ 00:05:53.734 END TEST env_dpdk_post_init 00:05:53.734 ************************************ 00:05:53.734 12:18:26 env -- common/autotest_common.sh@1142 -- # return 0 00:05:53.734 12:18:26 env -- env/env.sh@26 -- # uname 00:05:53.734 12:18:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:53.734 12:18:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.734 12:18:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.734 12:18:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.734 12:18:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.734 ************************************ 00:05:53.734 START TEST env_mem_callbacks 00:05:53.734 ************************************ 00:05:53.734 12:18:26 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.734 EAL: Detected CPU lcores: 128 00:05:53.734 EAL: Detected NUMA nodes: 2 00:05:53.734 EAL: Detected shared linkage of DPDK 00:05:53.734 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.734 EAL: Selected IOVA mode 'VA' 00:05:53.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.734 EAL: VFIO support initialized 00:05:53.734 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.734 00:05:53.734 00:05:53.734 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.734 http://cunit.sourceforge.net/ 00:05:53.734 00:05:53.734 00:05:53.734 Suite: memory 00:05:53.734 Test: test ... 
00:05:53.734 register 0x200000200000 2097152 00:05:53.734 malloc 3145728 00:05:53.734 register 0x200000400000 4194304 00:05:53.734 buf 0x200000500000 len 3145728 PASSED 00:05:53.734 malloc 64 00:05:53.734 buf 0x2000004fff40 len 64 PASSED 00:05:53.734 malloc 4194304 00:05:53.734 register 0x200000800000 6291456 00:05:53.734 buf 0x200000a00000 len 4194304 PASSED 00:05:53.734 free 0x200000500000 3145728 00:05:53.734 free 0x2000004fff40 64 00:05:53.734 unregister 0x200000400000 4194304 PASSED 00:05:53.734 free 0x200000a00000 4194304 00:05:53.734 unregister 0x200000800000 6291456 PASSED 00:05:53.734 malloc 8388608 00:05:53.734 register 0x200000400000 10485760 00:05:53.734 buf 0x200000600000 len 8388608 PASSED 00:05:53.734 free 0x200000600000 8388608 00:05:53.734 unregister 0x200000400000 10485760 PASSED 00:05:53.734 passed 00:05:53.734 00:05:53.734 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.734 suites 1 1 n/a 0 0 00:05:53.734 tests 1 1 1 0 0 00:05:53.734 asserts 15 15 15 0 n/a 00:05:53.734 00:05:53.734 Elapsed time = 0.010 seconds 00:05:53.735 00:05:53.735 real 0m0.072s 00:05:53.735 user 0m0.024s 00:05:53.735 sys 0m0.048s 00:05:53.735 12:18:27 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.735 12:18:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:53.735 ************************************ 00:05:53.735 END TEST env_mem_callbacks 00:05:53.735 ************************************ 00:05:53.735 12:18:27 env -- common/autotest_common.sh@1142 -- # return 0 00:05:53.735 00:05:53.735 real 0m11.989s 00:05:53.735 user 0m5.080s 00:05:53.735 sys 0m0.936s 00:05:53.735 12:18:27 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.735 12:18:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.735 ************************************ 00:05:53.735 END TEST env 00:05:53.735 ************************************ 00:05:53.735 12:18:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.735 12:18:27 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:53.735 12:18:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.735 12:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.735 12:18:27 -- common/autotest_common.sh@10 -- # set +x 00:05:53.735 ************************************ 00:05:53.735 START TEST rpc 00:05:53.735 ************************************ 00:05:53.735 12:18:27 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:53.995 * Looking for test storage... 00:05:53.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.995 12:18:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=215620 00:05:53.995 12:18:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.995 12:18:27 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:53.995 12:18:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 215620 00:05:53.995 12:18:27 rpc -- common/autotest_common.sh@829 -- # '[' -z 215620 ']' 00:05:53.995 12:18:27 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.995 12:18:27 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.995 12:18:27 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:53.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.996 12:18:27 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.996 12:18:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.996 [2024-07-25 12:18:27.311117] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:05:53.996 [2024-07-25 12:18:27.311189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid215620 ] 00:05:53.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.996 [2024-07-25 12:18:27.398309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.255 [2024-07-25 12:18:27.465911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:54.255 [2024-07-25 12:18:27.465952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 215620' to capture a snapshot of events at runtime. 00:05:54.255 [2024-07-25 12:18:27.465959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.255 [2024-07-25 12:18:27.465965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.255 [2024-07-25 12:18:27.465970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid215620 for offline analysis/debug. 00:05:54.255 [2024-07-25 12:18:27.465992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.824 12:18:28 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.824 12:18:28 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.824 12:18:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:54.824 12:18:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:54.824 12:18:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.824 12:18:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.824 12:18:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.824 12:18:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.824 12:18:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.824 ************************************ 00:05:54.824 START TEST rpc_integrity 00:05:54.824 ************************************ 00:05:54.824 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:54.824 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.824 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.824 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.824 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.824 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:54.824 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.085 { 00:05:55.085 "name": "Malloc0", 00:05:55.085 "aliases": [ 00:05:55.085 "5dd1b092-f706-45b7-a6f2-9ca9ebaea977" 00:05:55.085 ], 00:05:55.085 "product_name": "Malloc disk", 00:05:55.085 "block_size": 512, 00:05:55.085 "num_blocks": 16384, 00:05:55.085 "uuid": "5dd1b092-f706-45b7-a6f2-9ca9ebaea977", 00:05:55.085 "assigned_rate_limits": { 00:05:55.085 "rw_ios_per_sec": 0, 00:05:55.085 "rw_mbytes_per_sec": 0, 00:05:55.085 "r_mbytes_per_sec": 0, 00:05:55.085 "w_mbytes_per_sec": 0 00:05:55.085 }, 00:05:55.085 "claimed": false, 00:05:55.085 "zoned": false, 00:05:55.085 "supported_io_types": { 00:05:55.085 "read": true, 00:05:55.085 "write": true, 00:05:55.085 "unmap": true, 00:05:55.085 "flush": true, 00:05:55.085 "reset": true, 00:05:55.085 "nvme_admin": false, 00:05:55.085 "nvme_io": false, 00:05:55.085 "nvme_io_md": false, 00:05:55.085 "write_zeroes": true, 00:05:55.085 "zcopy": true, 00:05:55.085 "get_zone_info": false, 00:05:55.085 "zone_management": false, 00:05:55.085 "zone_append": false, 00:05:55.085 "compare": false, 00:05:55.085 "compare_and_write": false, 00:05:55.085 "abort": true, 00:05:55.085 "seek_hole": false, 00:05:55.085 "seek_data": false, 00:05:55.085 "copy": true, 00:05:55.085 "nvme_iov_md": false 00:05:55.085 }, 00:05:55.085 "memory_domains": [ 00:05:55.085 { 00:05:55.085 "dma_device_id": "system", 00:05:55.085 "dma_device_type": 1 00:05:55.085 }, 00:05:55.085 { 00:05:55.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.085 "dma_device_type": 2 00:05:55.085 } 00:05:55.085 ], 00:05:55.085 "driver_specific": {} 00:05:55.085 } 00:05:55.085 ]' 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.085 [2024-07-25 12:18:28.332636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:55.085 [2024-07-25 12:18:28.332672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.085 [2024-07-25 12:18:28.332684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x147e610 00:05:55.085 [2024-07-25 12:18:28.332691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.085 
[2024-07-25 12:18:28.333951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.085 [2024-07-25 12:18:28.333971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.085 Passthru0 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.085 { 00:05:55.085 "name": "Malloc0", 00:05:55.085 "aliases": [ 00:05:55.085 "5dd1b092-f706-45b7-a6f2-9ca9ebaea977" 00:05:55.085 ], 00:05:55.085 "product_name": "Malloc disk", 00:05:55.085 "block_size": 512, 00:05:55.085 "num_blocks": 16384, 00:05:55.085 "uuid": "5dd1b092-f706-45b7-a6f2-9ca9ebaea977", 00:05:55.085 "assigned_rate_limits": { 00:05:55.085 "rw_ios_per_sec": 0, 00:05:55.085 "rw_mbytes_per_sec": 0, 00:05:55.085 "r_mbytes_per_sec": 0, 00:05:55.085 "w_mbytes_per_sec": 0 00:05:55.085 }, 00:05:55.085 "claimed": true, 00:05:55.085 "claim_type": "exclusive_write", 00:05:55.085 "zoned": false, 00:05:55.085 "supported_io_types": { 00:05:55.085 "read": true, 00:05:55.085 "write": true, 00:05:55.085 "unmap": true, 00:05:55.085 "flush": true, 00:05:55.085 "reset": true, 00:05:55.085 "nvme_admin": false, 00:05:55.085 "nvme_io": false, 00:05:55.085 "nvme_io_md": false, 00:05:55.085 "write_zeroes": true, 00:05:55.085 "zcopy": true, 00:05:55.085 "get_zone_info": false, 00:05:55.085 "zone_management": false, 00:05:55.085 "zone_append": false, 00:05:55.085 "compare": false, 00:05:55.085 "compare_and_write": false, 00:05:55.085 "abort": true, 00:05:55.085 "seek_hole": false, 00:05:55.085 "seek_data": false, 00:05:55.085 "copy": true, 00:05:55.085 "nvme_iov_md": false 00:05:55.085 }, 00:05:55.085 "memory_domains": [ 00:05:55.085 { 00:05:55.085 "dma_device_id": "system", 00:05:55.085 "dma_device_type": 1 00:05:55.085 }, 00:05:55.085 { 00:05:55.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.085 "dma_device_type": 2 00:05:55.085 } 00:05:55.085 ], 00:05:55.085 "driver_specific": {} 00:05:55.085 }, 00:05:55.085 { 00:05:55.085 "name": "Passthru0", 00:05:55.085 "aliases": [ 00:05:55.085 "4a74118c-50e4-52e2-aef5-0f6e95d27679" 00:05:55.085 ], 00:05:55.085 "product_name": "passthru", 00:05:55.085 "block_size": 512, 00:05:55.085 "num_blocks": 16384, 00:05:55.085 "uuid": "4a74118c-50e4-52e2-aef5-0f6e95d27679", 00:05:55.085 "assigned_rate_limits": { 00:05:55.085 "rw_ios_per_sec": 0, 00:05:55.085 "rw_mbytes_per_sec": 0, 00:05:55.085 "r_mbytes_per_sec": 0, 00:05:55.085 "w_mbytes_per_sec": 0 00:05:55.085 }, 00:05:55.085 "claimed": false, 00:05:55.085 "zoned": false, 00:05:55.085 "supported_io_types": { 00:05:55.085 "read": true, 00:05:55.085 "write": true, 00:05:55.085 "unmap": true, 00:05:55.085 "flush": true, 00:05:55.085 "reset": true, 00:05:55.085 "nvme_admin": false, 00:05:55.085 "nvme_io": false, 00:05:55.085 "nvme_io_md": false, 00:05:55.085 "write_zeroes": true, 00:05:55.085 "zcopy": true, 00:05:55.085 "get_zone_info": false, 00:05:55.085 "zone_management": false, 00:05:55.085 "zone_append": false, 00:05:55.085 "compare": false, 00:05:55.085 "compare_and_write": false, 00:05:55.085 "abort": true, 00:05:55.085 "seek_hole": false, 
00:05:55.085 "seek_data": false, 00:05:55.085 "copy": true, 00:05:55.085 "nvme_iov_md": false 00:05:55.085 }, 00:05:55.085 "memory_domains": [ 00:05:55.085 { 00:05:55.085 "dma_device_id": "system", 00:05:55.085 "dma_device_type": 1 00:05:55.085 }, 00:05:55.085 { 00:05:55.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.085 "dma_device_type": 2 00:05:55.085 } 00:05:55.085 ], 00:05:55.085 "driver_specific": { 00:05:55.085 "passthru": { 00:05:55.085 "name": "Passthru0", 00:05:55.085 "base_bdev_name": "Malloc0" 00:05:55.085 } 00:05:55.085 } 00:05:55.085 } 00:05:55.085 ]' 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.085 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:55.085 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.086 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.086 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.086 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.086 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.086 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.086 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.086 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.086 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.086 12:18:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.086 00:05:55.086 real 0m0.295s 00:05:55.086 user 0m0.195s 00:05:55.086 sys 0m0.032s 00:05:55.086 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.086 12:18:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.086 ************************************ 00:05:55.086 END TEST rpc_integrity 00:05:55.086 ************************************ 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.346 12:18:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 ************************************ 00:05:55.346 START TEST rpc_plugins 00:05:55.346 ************************************ 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:55.346 { 00:05:55.346 "name": "Malloc1", 00:05:55.346 "aliases": [ 00:05:55.346 "52ebd68e-e25f-4220-a295-88e725e18574" 00:05:55.346 ], 00:05:55.346 "product_name": "Malloc disk", 00:05:55.346 "block_size": 4096, 00:05:55.346 "num_blocks": 256, 00:05:55.346 "uuid": "52ebd68e-e25f-4220-a295-88e725e18574", 00:05:55.346 "assigned_rate_limits": { 00:05:55.346 "rw_ios_per_sec": 0, 00:05:55.346 "rw_mbytes_per_sec": 0, 00:05:55.346 "r_mbytes_per_sec": 0, 00:05:55.346 "w_mbytes_per_sec": 0 00:05:55.346 }, 00:05:55.346 "claimed": false, 00:05:55.346 "zoned": false, 00:05:55.346 "supported_io_types": { 00:05:55.346 "read": true, 00:05:55.346 "write": true, 00:05:55.346 "unmap": true, 00:05:55.346 "flush": true, 00:05:55.346 "reset": true, 00:05:55.346 "nvme_admin": false, 00:05:55.346 "nvme_io": false, 00:05:55.346 "nvme_io_md": false, 00:05:55.346 "write_zeroes": true, 00:05:55.346 "zcopy": true, 00:05:55.346 "get_zone_info": false, 00:05:55.346 "zone_management": false, 00:05:55.346 "zone_append": false, 00:05:55.346 "compare": false, 00:05:55.346 "compare_and_write": false, 00:05:55.346 "abort": true, 00:05:55.346 "seek_hole": false, 00:05:55.346 "seek_data": false, 00:05:55.346 "copy": true, 00:05:55.346 "nvme_iov_md": false 00:05:55.346 }, 00:05:55.346 "memory_domains": [ 00:05:55.346 { 00:05:55.346 "dma_device_id": "system", 00:05:55.346 "dma_device_type": 1 00:05:55.346 }, 00:05:55.346 { 00:05:55.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.346 "dma_device_type": 2 00:05:55.346 } 00:05:55.346 ], 00:05:55.346 "driver_specific": {} 00:05:55.346 } 00:05:55.346 ]' 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:55.346 12:18:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:55.346 00:05:55.346 real 0m0.149s 00:05:55.346 user 0m0.093s 00:05:55.346 sys 0m0.022s 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.346 12:18:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 ************************************ 00:05:55.346 END TEST rpc_plugins 00:05:55.346 ************************************ 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.346 12:18:28 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.346 12:18:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.606 ************************************ 00:05:55.606 START TEST rpc_trace_cmd_test 00:05:55.606 ************************************ 00:05:55.606 12:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:55.606 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:55.606 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:55.606 12:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.606 12:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.606 12:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.606 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:55.606 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid215620", 00:05:55.606 "tpoint_group_mask": "0x8", 00:05:55.606 "iscsi_conn": { 00:05:55.606 "mask": "0x2", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "scsi": { 00:05:55.606 "mask": "0x4", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "bdev": { 00:05:55.606 "mask": "0x8", 00:05:55.606 "tpoint_mask": "0xffffffffffffffff" 00:05:55.606 }, 00:05:55.606 "nvmf_rdma": { 00:05:55.606 "mask": "0x10", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "nvmf_tcp": { 00:05:55.606 "mask": "0x20", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "ftl": { 00:05:55.606 "mask": "0x40", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "blobfs": { 00:05:55.606 "mask": "0x80", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "dsa": { 00:05:55.606 "mask": "0x200", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "thread": { 00:05:55.606 "mask": "0x400", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "nvme_pcie": { 00:05:55.606 "mask": "0x800", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.606 }, 00:05:55.606 "iaa": { 00:05:55.606 "mask": "0x1000", 00:05:55.606 "tpoint_mask": "0x0" 00:05:55.607 }, 00:05:55.607 "nvme_tcp": { 00:05:55.607 "mask": "0x2000", 00:05:55.607 "tpoint_mask": "0x0" 00:05:55.607 }, 00:05:55.607 "bdev_nvme": { 00:05:55.607 "mask": "0x4000", 00:05:55.607 "tpoint_mask": "0x0" 00:05:55.607 }, 00:05:55.607 "sock": { 00:05:55.607 "mask": "0x8000", 00:05:55.607 "tpoint_mask": "0x0" 00:05:55.607 } 00:05:55.607 }' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.607 12:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:55.867 12:18:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
00:05:55.867 00:05:55.867 real 0m0.247s 00:05:55.867 user 0m0.212s 00:05:55.867 sys 0m0.026s 00:05:55.867 12:18:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.867 12:18:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.867 ************************************ 00:05:55.867 END TEST rpc_trace_cmd_test 00:05:55.867 ************************************ 00:05:55.867 12:18:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.867 12:18:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:55.867 12:18:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:55.867 12:18:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:55.867 12:18:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.867 12:18:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.867 12:18:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.867 ************************************ 00:05:55.867 START TEST rpc_daemon_integrity 00:05:55.867 ************************************ 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.867 { 00:05:55.867 "name": "Malloc2", 00:05:55.867 "aliases": [ 00:05:55.867 "0e93d8b2-3131-4fe1-a80e-11c26b74ade5" 00:05:55.867 ], 00:05:55.867 "product_name": "Malloc disk", 00:05:55.867 "block_size": 512, 00:05:55.867 "num_blocks": 16384, 00:05:55.867 "uuid": "0e93d8b2-3131-4fe1-a80e-11c26b74ade5", 00:05:55.867 "assigned_rate_limits": { 00:05:55.867 "rw_ios_per_sec": 0, 00:05:55.867 "rw_mbytes_per_sec": 0, 00:05:55.867 "r_mbytes_per_sec": 0, 00:05:55.867 "w_mbytes_per_sec": 0 00:05:55.867 }, 00:05:55.867 "claimed": false, 00:05:55.867 "zoned": false, 00:05:55.867 "supported_io_types": { 00:05:55.867 "read": true, 00:05:55.867 "write": true, 00:05:55.867 "unmap": true, 00:05:55.867 "flush": true, 00:05:55.867 "reset": true, 00:05:55.867 "nvme_admin": false, 00:05:55.867 "nvme_io": false, 
00:05:55.867 "nvme_io_md": false, 00:05:55.867 "write_zeroes": true, 00:05:55.867 "zcopy": true, 00:05:55.867 "get_zone_info": false, 00:05:55.867 "zone_management": false, 00:05:55.867 "zone_append": false, 00:05:55.867 "compare": false, 00:05:55.867 "compare_and_write": false, 00:05:55.867 "abort": true, 00:05:55.867 "seek_hole": false, 00:05:55.867 "seek_data": false, 00:05:55.867 "copy": true, 00:05:55.867 "nvme_iov_md": false 00:05:55.867 }, 00:05:55.867 "memory_domains": [ 00:05:55.867 { 00:05:55.867 "dma_device_id": "system", 00:05:55.867 "dma_device_type": 1 00:05:55.867 }, 00:05:55.867 { 00:05:55.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.867 "dma_device_type": 2 00:05:55.867 } 00:05:55.867 ], 00:05:55.867 "driver_specific": {} 00:05:55.867 } 00:05:55.867 ]' 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.867 [2024-07-25 12:18:29.243087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:55.867 [2024-07-25 12:18:29.243115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.867 [2024-07-25 12:18:29.243125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16179c0 00:05:55.867 [2024-07-25 12:18:29.243131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.867 [2024-07-25 12:18:29.244254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.867 [2024-07-25 12:18:29.244271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.867 Passthru0 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.867 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.867 { 00:05:55.867 "name": "Malloc2", 00:05:55.867 "aliases": [ 00:05:55.867 "0e93d8b2-3131-4fe1-a80e-11c26b74ade5" 00:05:55.867 ], 00:05:55.867 "product_name": "Malloc disk", 00:05:55.867 "block_size": 512, 00:05:55.867 "num_blocks": 16384, 00:05:55.867 "uuid": "0e93d8b2-3131-4fe1-a80e-11c26b74ade5", 00:05:55.867 "assigned_rate_limits": { 00:05:55.867 "rw_ios_per_sec": 0, 00:05:55.867 "rw_mbytes_per_sec": 0, 00:05:55.867 "r_mbytes_per_sec": 0, 00:05:55.867 "w_mbytes_per_sec": 0 00:05:55.867 }, 00:05:55.867 "claimed": true, 00:05:55.867 "claim_type": "exclusive_write", 00:05:55.867 "zoned": false, 00:05:55.867 "supported_io_types": { 00:05:55.867 "read": true, 00:05:55.867 "write": true, 00:05:55.867 "unmap": true, 00:05:55.868 "flush": true, 00:05:55.868 "reset": true, 00:05:55.868 "nvme_admin": false, 00:05:55.868 "nvme_io": false, 00:05:55.868 "nvme_io_md": false, 00:05:55.868 "write_zeroes": true, 00:05:55.868 "zcopy": true, 00:05:55.868 "get_zone_info": 
false, 00:05:55.868 "zone_management": false, 00:05:55.868 "zone_append": false, 00:05:55.868 "compare": false, 00:05:55.868 "compare_and_write": false, 00:05:55.868 "abort": true, 00:05:55.868 "seek_hole": false, 00:05:55.868 "seek_data": false, 00:05:55.868 "copy": true, 00:05:55.868 "nvme_iov_md": false 00:05:55.868 }, 00:05:55.868 "memory_domains": [ 00:05:55.868 { 00:05:55.868 "dma_device_id": "system", 00:05:55.868 "dma_device_type": 1 00:05:55.868 }, 00:05:55.868 { 00:05:55.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.868 "dma_device_type": 2 00:05:55.868 } 00:05:55.868 ], 00:05:55.868 "driver_specific": {} 00:05:55.868 }, 00:05:55.868 { 00:05:55.868 "name": "Passthru0", 00:05:55.868 "aliases": [ 00:05:55.868 "5b9fb6ec-57b4-53e4-aeb4-dae9ab2e7b3d" 00:05:55.868 ], 00:05:55.868 "product_name": "passthru", 00:05:55.868 "block_size": 512, 00:05:55.868 "num_blocks": 16384, 00:05:55.868 "uuid": "5b9fb6ec-57b4-53e4-aeb4-dae9ab2e7b3d", 00:05:55.868 "assigned_rate_limits": { 00:05:55.868 "rw_ios_per_sec": 0, 00:05:55.868 "rw_mbytes_per_sec": 0, 00:05:55.868 "r_mbytes_per_sec": 0, 00:05:55.868 "w_mbytes_per_sec": 0 00:05:55.868 }, 00:05:55.868 "claimed": false, 00:05:55.868 "zoned": false, 00:05:55.868 "supported_io_types": { 00:05:55.868 "read": true, 00:05:55.868 "write": true, 00:05:55.868 "unmap": true, 00:05:55.868 "flush": true, 00:05:55.868 "reset": true, 00:05:55.868 "nvme_admin": false, 00:05:55.868 "nvme_io": false, 00:05:55.868 "nvme_io_md": false, 00:05:55.868 "write_zeroes": true, 00:05:55.868 "zcopy": true, 00:05:55.868 "get_zone_info": false, 00:05:55.868 "zone_management": false, 00:05:55.868 "zone_append": false, 00:05:55.868 "compare": false, 00:05:55.868 "compare_and_write": false, 00:05:55.868 "abort": true, 00:05:55.868 "seek_hole": false, 00:05:55.868 "seek_data": false, 00:05:55.868 "copy": true, 00:05:55.868 "nvme_iov_md": false 00:05:55.868 }, 00:05:55.868 "memory_domains": [ 00:05:55.868 { 00:05:55.868 "dma_device_id": "system", 00:05:55.868 "dma_device_type": 1 00:05:55.868 }, 00:05:55.868 { 00:05:55.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.868 "dma_device_type": 2 00:05:55.868 } 00:05:55.868 ], 00:05:55.868 "driver_specific": { 00:05:55.868 "passthru": { 00:05:55.868 "name": "Passthru0", 00:05:55.868 "base_bdev_name": "Malloc2" 00:05:55.868 } 00:05:55.868 } 00:05:55.868 } 00:05:55.868 ]' 00:05:55.868 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.128 12:18:29 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.128 00:05:56.128 real 0m0.295s 00:05:56.128 user 0m0.178s 00:05:56.128 sys 0m0.054s 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.128 12:18:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.128 ************************************ 00:05:56.128 END TEST rpc_daemon_integrity 00:05:56.128 ************************************ 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.128 12:18:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:56.128 12:18:29 rpc -- rpc/rpc.sh@84 -- # killprocess 215620 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@948 -- # '[' -z 215620 ']' 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@952 -- # kill -0 215620 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@953 -- # uname 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 215620 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 215620' 00:05:56.128 killing process with pid 215620 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@967 -- # kill 215620 00:05:56.128 12:18:29 rpc -- common/autotest_common.sh@972 -- # wait 215620 00:05:56.389 00:05:56.389 real 0m2.543s 00:05:56.389 user 0m3.381s 00:05:56.389 sys 0m0.716s 00:05:56.389 12:18:29 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.389 12:18:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.389 ************************************ 00:05:56.389 END TEST rpc 00:05:56.389 ************************************ 00:05:56.389 12:18:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.389 12:18:29 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:56.389 12:18:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.389 12:18:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.389 12:18:29 -- common/autotest_common.sh@10 -- # set +x 00:05:56.389 ************************************ 00:05:56.389 START TEST skip_rpc 00:05:56.389 ************************************ 00:05:56.389 12:18:29 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:56.650 * Looking for test storage... 
00:05:56.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.650 12:18:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.650 12:18:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.651 12:18:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:56.651 12:18:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.651 12:18:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.651 12:18:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.651 ************************************ 00:05:56.651 START TEST skip_rpc 00:05:56.651 ************************************ 00:05:56.651 12:18:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:56.651 12:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=216146 00:05:56.651 12:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.651 12:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:56.651 12:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:56.651 [2024-07-25 12:18:29.999185] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:05:56.651 [2024-07-25 12:18:29.999314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216146 ] 00:05:56.911 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.911 [2024-07-25 12:18:30.143780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.911 [2024-07-25 12:18:30.216564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 216146 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 216146 ']' 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 216146 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 216146 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 216146' 00:06:02.193 killing process with pid 216146 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 216146 00:06:02.193 12:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 216146 00:06:02.193 00:06:02.193 real 0m5.261s 00:06:02.193 user 0m5.002s 00:06:02.193 sys 0m0.287s 00:06:02.193 12:18:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.193 12:18:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.193 ************************************ 00:06:02.193 END TEST skip_rpc 00:06:02.193 ************************************ 00:06:02.193 12:18:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:02.193 12:18:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:02.193 12:18:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.193 12:18:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.193 12:18:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.193 ************************************ 00:06:02.193 START TEST skip_rpc_with_json 00:06:02.193 ************************************ 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=217091 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 217091 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 217091 ']' 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.193 12:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.193 [2024-07-25 12:18:35.284476] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:06:02.193 [2024-07-25 12:18:35.284525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217091 ] 00:06:02.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.193 [2024-07-25 12:18:35.368502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.193 [2024-07-25 12:18:35.434247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.134 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.134 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:03.134 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:03.134 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.134 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.134 [2024-07-25 12:18:36.470469] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:03.134 request: 00:06:03.134 { 00:06:03.134 "trtype": "tcp", 00:06:03.134 "method": "nvmf_get_transports", 00:06:03.134 "req_id": 1 00:06:03.134 } 00:06:03.134 Got JSON-RPC error response 00:06:03.134 response: 00:06:03.134 { 00:06:03.135 "code": -19, 00:06:03.135 "message": "No such device" 00:06:03.135 } 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.135 [2024-07-25 12:18:36.482591] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.135 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.395 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.395 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.395 { 00:06:03.395 "subsystems": [ 00:06:03.395 { 00:06:03.395 "subsystem": "vfio_user_target", 00:06:03.395 "config": null 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "subsystem": "keyring", 00:06:03.395 "config": [] 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "subsystem": "iobuf", 00:06:03.395 "config": [ 00:06:03.395 { 00:06:03.395 "method": "iobuf_set_options", 00:06:03.395 
"params": { 00:06:03.395 "small_pool_count": 8192, 00:06:03.395 "large_pool_count": 1024, 00:06:03.395 "small_bufsize": 8192, 00:06:03.395 "large_bufsize": 135168 00:06:03.395 } 00:06:03.395 } 00:06:03.395 ] 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "subsystem": "sock", 00:06:03.395 "config": [ 00:06:03.395 { 00:06:03.395 "method": "sock_set_default_impl", 00:06:03.395 "params": { 00:06:03.395 "impl_name": "posix" 00:06:03.395 } 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "method": "sock_impl_set_options", 00:06:03.395 "params": { 00:06:03.395 "impl_name": "ssl", 00:06:03.395 "recv_buf_size": 4096, 00:06:03.395 "send_buf_size": 4096, 00:06:03.395 "enable_recv_pipe": true, 00:06:03.395 "enable_quickack": false, 00:06:03.395 "enable_placement_id": 0, 00:06:03.395 "enable_zerocopy_send_server": true, 00:06:03.395 "enable_zerocopy_send_client": false, 00:06:03.395 "zerocopy_threshold": 0, 00:06:03.395 "tls_version": 0, 00:06:03.395 "enable_ktls": false 00:06:03.395 } 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "method": "sock_impl_set_options", 00:06:03.395 "params": { 00:06:03.395 "impl_name": "posix", 00:06:03.395 "recv_buf_size": 2097152, 00:06:03.395 "send_buf_size": 2097152, 00:06:03.395 "enable_recv_pipe": true, 00:06:03.395 "enable_quickack": false, 00:06:03.395 "enable_placement_id": 0, 00:06:03.395 "enable_zerocopy_send_server": true, 00:06:03.395 "enable_zerocopy_send_client": false, 00:06:03.395 "zerocopy_threshold": 0, 00:06:03.395 "tls_version": 0, 00:06:03.395 "enable_ktls": false 00:06:03.395 } 00:06:03.395 } 00:06:03.395 ] 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "subsystem": "vmd", 00:06:03.395 "config": [] 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "subsystem": "accel", 00:06:03.395 "config": [ 00:06:03.395 { 00:06:03.395 "method": "accel_set_options", 00:06:03.395 "params": { 00:06:03.395 "small_cache_size": 128, 00:06:03.395 "large_cache_size": 16, 00:06:03.395 "task_count": 2048, 00:06:03.395 "sequence_count": 2048, 00:06:03.395 "buf_count": 2048 00:06:03.395 } 00:06:03.395 } 00:06:03.395 ] 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "subsystem": "bdev", 00:06:03.395 "config": [ 00:06:03.395 { 00:06:03.395 "method": "bdev_set_options", 00:06:03.395 "params": { 00:06:03.395 "bdev_io_pool_size": 65535, 00:06:03.395 "bdev_io_cache_size": 256, 00:06:03.395 "bdev_auto_examine": true, 00:06:03.395 "iobuf_small_cache_size": 128, 00:06:03.395 "iobuf_large_cache_size": 16 00:06:03.395 } 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "method": "bdev_raid_set_options", 00:06:03.395 "params": { 00:06:03.395 "process_window_size_kb": 1024, 00:06:03.395 "process_max_bandwidth_mb_sec": 0 00:06:03.395 } 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "method": "bdev_iscsi_set_options", 00:06:03.395 "params": { 00:06:03.395 "timeout_sec": 30 00:06:03.395 } 00:06:03.395 }, 00:06:03.395 { 00:06:03.395 "method": "bdev_nvme_set_options", 00:06:03.395 "params": { 00:06:03.395 "action_on_timeout": "none", 00:06:03.395 "timeout_us": 0, 00:06:03.395 "timeout_admin_us": 0, 00:06:03.396 "keep_alive_timeout_ms": 10000, 00:06:03.396 "arbitration_burst": 0, 00:06:03.396 "low_priority_weight": 0, 00:06:03.396 "medium_priority_weight": 0, 00:06:03.396 "high_priority_weight": 0, 00:06:03.396 "nvme_adminq_poll_period_us": 10000, 00:06:03.396 "nvme_ioq_poll_period_us": 0, 00:06:03.396 "io_queue_requests": 0, 00:06:03.396 "delay_cmd_submit": true, 00:06:03.396 "transport_retry_count": 4, 00:06:03.396 "bdev_retry_count": 3, 00:06:03.396 "transport_ack_timeout": 0, 00:06:03.396 "ctrlr_loss_timeout_sec": 0, 
00:06:03.396 "reconnect_delay_sec": 0, 00:06:03.396 "fast_io_fail_timeout_sec": 0, 00:06:03.396 "disable_auto_failback": false, 00:06:03.396 "generate_uuids": false, 00:06:03.396 "transport_tos": 0, 00:06:03.396 "nvme_error_stat": false, 00:06:03.396 "rdma_srq_size": 0, 00:06:03.396 "io_path_stat": false, 00:06:03.396 "allow_accel_sequence": false, 00:06:03.396 "rdma_max_cq_size": 0, 00:06:03.396 "rdma_cm_event_timeout_ms": 0, 00:06:03.396 "dhchap_digests": [ 00:06:03.396 "sha256", 00:06:03.396 "sha384", 00:06:03.396 "sha512" 00:06:03.396 ], 00:06:03.396 "dhchap_dhgroups": [ 00:06:03.396 "null", 00:06:03.396 "ffdhe2048", 00:06:03.396 "ffdhe3072", 00:06:03.396 "ffdhe4096", 00:06:03.396 "ffdhe6144", 00:06:03.396 "ffdhe8192" 00:06:03.396 ] 00:06:03.396 } 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "method": "bdev_nvme_set_hotplug", 00:06:03.396 "params": { 00:06:03.396 "period_us": 100000, 00:06:03.396 "enable": false 00:06:03.396 } 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "method": "bdev_wait_for_examine" 00:06:03.396 } 00:06:03.396 ] 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "scsi", 00:06:03.396 "config": null 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "scheduler", 00:06:03.396 "config": [ 00:06:03.396 { 00:06:03.396 "method": "framework_set_scheduler", 00:06:03.396 "params": { 00:06:03.396 "name": "static" 00:06:03.396 } 00:06:03.396 } 00:06:03.396 ] 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "vhost_scsi", 00:06:03.396 "config": [] 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "vhost_blk", 00:06:03.396 "config": [] 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "ublk", 00:06:03.396 "config": [] 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "nbd", 00:06:03.396 "config": [] 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "nvmf", 00:06:03.396 "config": [ 00:06:03.396 { 00:06:03.396 "method": "nvmf_set_config", 00:06:03.396 "params": { 00:06:03.396 "discovery_filter": "match_any", 00:06:03.396 "admin_cmd_passthru": { 00:06:03.396 "identify_ctrlr": false 00:06:03.396 } 00:06:03.396 } 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "method": "nvmf_set_max_subsystems", 00:06:03.396 "params": { 00:06:03.396 "max_subsystems": 1024 00:06:03.396 } 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "method": "nvmf_set_crdt", 00:06:03.396 "params": { 00:06:03.396 "crdt1": 0, 00:06:03.396 "crdt2": 0, 00:06:03.396 "crdt3": 0 00:06:03.396 } 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "method": "nvmf_create_transport", 00:06:03.396 "params": { 00:06:03.396 "trtype": "TCP", 00:06:03.396 "max_queue_depth": 128, 00:06:03.396 "max_io_qpairs_per_ctrlr": 127, 00:06:03.396 "in_capsule_data_size": 4096, 00:06:03.396 "max_io_size": 131072, 00:06:03.396 "io_unit_size": 131072, 00:06:03.396 "max_aq_depth": 128, 00:06:03.396 "num_shared_buffers": 511, 00:06:03.396 "buf_cache_size": 4294967295, 00:06:03.396 "dif_insert_or_strip": false, 00:06:03.396 "zcopy": false, 00:06:03.396 "c2h_success": true, 00:06:03.396 "sock_priority": 0, 00:06:03.396 "abort_timeout_sec": 1, 00:06:03.396 "ack_timeout": 0, 00:06:03.396 "data_wr_pool_size": 0 00:06:03.396 } 00:06:03.396 } 00:06:03.396 ] 00:06:03.396 }, 00:06:03.396 { 00:06:03.396 "subsystem": "iscsi", 00:06:03.396 "config": [ 00:06:03.396 { 00:06:03.396 "method": "iscsi_set_options", 00:06:03.396 "params": { 00:06:03.396 "node_base": "iqn.2016-06.io.spdk", 00:06:03.396 "max_sessions": 128, 00:06:03.396 "max_connections_per_session": 2, 00:06:03.396 "max_queue_depth": 64, 00:06:03.396 "default_time2wait": 
2, 00:06:03.396 "default_time2retain": 20, 00:06:03.396 "first_burst_length": 8192, 00:06:03.396 "immediate_data": true, 00:06:03.396 "allow_duplicated_isid": false, 00:06:03.396 "error_recovery_level": 0, 00:06:03.396 "nop_timeout": 60, 00:06:03.396 "nop_in_interval": 30, 00:06:03.396 "disable_chap": false, 00:06:03.396 "require_chap": false, 00:06:03.396 "mutual_chap": false, 00:06:03.396 "chap_group": 0, 00:06:03.396 "max_large_datain_per_connection": 64, 00:06:03.396 "max_r2t_per_connection": 4, 00:06:03.396 "pdu_pool_size": 36864, 00:06:03.396 "immediate_data_pool_size": 16384, 00:06:03.396 "data_out_pool_size": 2048 00:06:03.396 } 00:06:03.396 } 00:06:03.396 ] 00:06:03.396 } 00:06:03.396 ] 00:06:03.396 } 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 217091 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 217091 ']' 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 217091 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 217091 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 217091' 00:06:03.396 killing process with pid 217091 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 217091 00:06:03.396 12:18:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 217091 00:06:03.657 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=217404 00:06:03.657 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:03.657 12:18:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 217404 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 217404 ']' 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 217404 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 217404 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 217404' 00:06:09.009 killing process with pid 217404 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@967 -- # kill 217404 00:06:09.009 12:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 217404 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:09.009 00:06:09.009 real 0m6.922s 00:06:09.009 user 0m7.090s 00:06:09.009 sys 0m0.644s 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 ************************************ 00:06:09.009 END TEST skip_rpc_with_json 00:06:09.009 ************************************ 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:09.009 12:18:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 ************************************ 00:06:09.009 START TEST skip_rpc_with_delay 00:06:09.009 ************************************ 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.009 [2024-07-25 12:18:42.285956] app.c: 832:spdk_app_start: *ERROR*: 
Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:09.009 [2024-07-25 12:18:42.286028] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.009 00:06:09.009 real 0m0.078s 00:06:09.009 user 0m0.053s 00:06:09.009 sys 0m0.024s 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.009 12:18:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 ************************************ 00:06:09.009 END TEST skip_rpc_with_delay 00:06:09.009 ************************************ 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:09.009 12:18:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:09.009 12:18:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:09.009 12:18:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.009 12:18:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 ************************************ 00:06:09.009 START TEST exit_on_failed_rpc_init 00:06:09.009 ************************************ 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=218371 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 218371 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 218371 ']' 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 12:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.275 [2024-07-25 12:18:42.436408] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
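The skip_rpc_with_delay check above asserts exactly one thing: spdk_tgt must refuse to start when --wait-for-rpc is combined with --no-rpc-server. A minimal sketch of that negative test, assuming the build path shown in the log and using a plain if in place of the NOT helper from autotest_common.sh:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  # The command is expected to fail; a successful start here would mean the guard is broken.
  if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target started while told to wait for RPC with no RPC server" >&2
    exit 1
  fi
  echo "spdk_tgt rejected --wait-for-rpc together with --no-rpc-server, as expected"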
00:06:09.275 [2024-07-25 12:18:42.436462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218371 ] 00:06:09.275 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.275 [2024-07-25 12:18:42.499838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.275 [2024-07-25 12:18:42.564798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:09.536 12:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.797 [2024-07-25 12:18:42.992175] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
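The exit_on_failed_rpc_init sequence here is a deliberate socket collision: the first target already listens on /var/tmp/spdk.sock, so the second instance started with -m 0x2 has to fail its RPC initialization and exit non-zero, as the errors that follow show. A rough equivalent, assuming the default socket path and substituting a sleep for the waitforlisten helper:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &             # first instance claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 2                          # crude stand-in for waitforlisten
  if "$SPDK_TGT" -m 0x2; then      # expected to fail: RPC socket already in use
    echo "unexpected: second target started on an occupied RPC socket" >&2
  fi
  kill "$first_pid"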
00:06:09.797 [2024-07-25 12:18:42.992226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218405 ] 00:06:09.797 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.797 [2024-07-25 12:18:43.068485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.797 [2024-07-25 12:18:43.146703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.797 [2024-07-25 12:18:43.146787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:09.797 [2024-07-25 12:18:43.146805] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.797 [2024-07-25 12:18:43.146818] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 218371 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 218371 ']' 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 218371 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 218371 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 218371' 00:06:10.058 killing process with pid 218371 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 218371 00:06:10.058 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 218371 00:06:10.317 00:06:10.317 real 0m1.109s 00:06:10.317 user 0m1.624s 00:06:10.317 sys 0m0.366s 00:06:10.317 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.317 12:18:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.317 ************************************ 00:06:10.317 END TEST exit_on_failed_rpc_init 00:06:10.317 ************************************ 00:06:10.317 12:18:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.317 12:18:43 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.317 00:06:10.317 real 0m13.755s 00:06:10.317 user 0m13.916s 00:06:10.317 sys 0m1.584s 00:06:10.317 12:18:43 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.317 12:18:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.317 ************************************ 00:06:10.317 END TEST skip_rpc 00:06:10.317 ************************************ 00:06:10.317 12:18:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.317 12:18:43 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:10.317 12:18:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.317 12:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.317 12:18:43 -- common/autotest_common.sh@10 -- # set +x 00:06:10.317 ************************************ 00:06:10.317 START TEST rpc_client 00:06:10.317 ************************************ 00:06:10.317 12:18:43 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:10.317 * Looking for test storage... 00:06:10.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:10.317 12:18:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:10.317 OK 00:06:10.317 12:18:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.317 00:06:10.317 real 0m0.121s 00:06:10.317 user 0m0.056s 00:06:10.317 sys 0m0.073s 00:06:10.317 12:18:43 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.317 12:18:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.317 ************************************ 00:06:10.317 END TEST rpc_client 00:06:10.317 ************************************ 00:06:10.577 12:18:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.577 12:18:43 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:10.577 12:18:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.577 12:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.577 12:18:43 -- common/autotest_common.sh@10 -- # set +x 00:06:10.577 ************************************ 00:06:10.577 START TEST json_config 00:06:10.577 ************************************ 00:06:10.577 12:18:43 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:10.577 12:18:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.577 12:18:43 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.577 12:18:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.578 12:18:43 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.578 12:18:43 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.578 12:18:43 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.578 12:18:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.578 12:18:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.578 12:18:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.578 12:18:43 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.578 12:18:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@47 -- # : 0 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.578 12:18:43 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.578 12:18:43 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:10.578 INFO: JSON configuration test init 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.578 12:18:43 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:10.578 12:18:43 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.578 12:18:43 json_config -- json_config/common.sh@10 -- # shift 00:06:10.578 12:18:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.578 12:18:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.578 12:18:43 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:06:10.578 12:18:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.578 12:18:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.578 12:18:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=218795 00:06:10.578 12:18:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.578 Waiting for target to run... 00:06:10.578 12:18:43 json_config -- json_config/common.sh@25 -- # waitforlisten 218795 /var/tmp/spdk_tgt.sock 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@829 -- # '[' -z 218795 ']' 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.578 12:18:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.578 12:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.578 [2024-07-25 12:18:43.983125] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:06:10.578 [2024-07-25 12:18:43.983195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218795 ] 00:06:10.839 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.099 [2024-07-25 12:18:44.334129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.099 [2024-07-25 12:18:44.397889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.669 12:18:44 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.669 12:18:44 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:11.669 12:18:44 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.669 00:06:11.669 12:18:44 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:11.669 12:18:44 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:11.669 12:18:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.669 12:18:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.669 12:18:44 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:11.669 12:18:44 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:11.669 12:18:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.669 12:18:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.669 12:18:44 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:11.669 12:18:44 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:11.669 12:18:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:06:14.967 12:18:47 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:14.967 12:18:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:14.967 12:18:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.967 12:18:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.967 12:18:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:14.967 12:18:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:14.967 12:18:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:14.967 12:18:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:14.967 12:18:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:14.967 12:18:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@51 -- # sort 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:14.967 12:18:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.967 12:18:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:14.967 12:18:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.967 12:18:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:14.967 12:18:48 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:14.967 12:18:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:15.228 MallocForNvmf0 00:06:15.228 12:18:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:15.228 12:18:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:15.228 MallocForNvmf1 00:06:15.228 12:18:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:15.228 12:18:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:15.487 [2024-07-25 12:18:48.812803] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.487 12:18:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.487 12:18:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.746 12:18:49 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:15.746 12:18:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:16.005 12:18:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:16.005 12:18:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:16.005 12:18:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:16.005 12:18:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:16.264 [2024-07-25 12:18:49.583299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:16.264 12:18:49 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:16.264 12:18:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.264 12:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.264 12:18:49 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:16.264 12:18:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.264 12:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.264 12:18:49 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:16.264 12:18:49 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:16.264 12:18:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:06:16.526 MallocBdevForConfigChangeCheck 00:06:16.526 12:18:49 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:16.526 12:18:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.526 12:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.526 12:18:49 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:16.526 12:18:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.096 12:18:50 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:17.096 INFO: shutting down applications... 00:06:17.096 12:18:50 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:17.096 12:18:50 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:17.096 12:18:50 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:17.096 12:18:50 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:19.633 Calling clear_iscsi_subsystem 00:06:19.633 Calling clear_nvmf_subsystem 00:06:19.633 Calling clear_nbd_subsystem 00:06:19.633 Calling clear_ublk_subsystem 00:06:19.633 Calling clear_vhost_blk_subsystem 00:06:19.633 Calling clear_vhost_scsi_subsystem 00:06:19.633 Calling clear_bdev_subsystem 00:06:19.633 12:18:52 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:19.633 12:18:52 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:19.633 12:18:52 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:19.633 12:18:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.633 12:18:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:19.633 12:18:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:19.893 12:18:53 json_config -- json_config/json_config.sh@349 -- # break 00:06:19.893 12:18:53 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:19.893 12:18:53 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:19.893 12:18:53 json_config -- json_config/common.sh@31 -- # local app=target 00:06:19.893 12:18:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.893 12:18:53 json_config -- json_config/common.sh@35 -- # [[ -n 218795 ]] 00:06:19.893 12:18:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 218795 00:06:19.893 12:18:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.893 12:18:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.893 12:18:53 json_config -- json_config/common.sh@41 -- # kill -0 218795 00:06:19.893 12:18:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.460 12:18:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.460 12:18:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.460 12:18:53 
json_config -- json_config/common.sh@41 -- # kill -0 218795 00:06:20.460 12:18:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.460 12:18:53 json_config -- json_config/common.sh@43 -- # break 00:06:20.460 12:18:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.460 12:18:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.460 SPDK target shutdown done 00:06:20.460 12:18:53 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:20.460 INFO: relaunching applications... 00:06:20.460 12:18:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:20.460 12:18:53 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.460 12:18:53 json_config -- json_config/common.sh@10 -- # shift 00:06:20.460 12:18:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.460 12:18:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.460 12:18:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.460 12:18:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.460 12:18:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.460 12:18:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=220459 00:06:20.460 12:18:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.460 Waiting for target to run... 00:06:20.460 12:18:53 json_config -- json_config/common.sh@25 -- # waitforlisten 220459 /var/tmp/spdk_tgt.sock 00:06:20.460 12:18:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:20.460 12:18:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 220459 ']' 00:06:20.460 12:18:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.460 12:18:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.460 12:18:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.460 12:18:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.460 12:18:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.460 [2024-07-25 12:18:53.714224] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
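Collected from the tgt_rpc calls earlier in this json_config run, the NVMe-oF state that the relaunched target now has to reproduce from spdk_tgt_config.json was built with the following RPCs (arguments taken verbatim from the log, against the same /var/tmp/spdk_tgt.sock socket):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420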
00:06:20.460 [2024-07-25 12:18:53.714279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220459 ] 00:06:20.460 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.720 [2024-07-25 12:18:53.997961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.720 [2024-07-25 12:18:54.047842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.017 [2024-07-25 12:18:57.070521] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.017 [2024-07-25 12:18:57.102997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:24.017 12:18:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.017 12:18:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:24.017 12:18:57 json_config -- json_config/common.sh@26 -- # echo '' 00:06:24.017 00:06:24.017 12:18:57 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:24.017 12:18:57 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:24.017 INFO: Checking if target configuration is the same... 00:06:24.017 12:18:57 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.017 12:18:57 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:24.017 12:18:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.017 + '[' 2 -ne 2 ']' 00:06:24.017 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:24.017 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:24.017 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.017 +++ basename /dev/fd/62 00:06:24.017 ++ mktemp /tmp/62.XXX 00:06:24.017 + tmp_file_1=/tmp/62.6z9 00:06:24.017 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.017 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.017 + tmp_file_2=/tmp/spdk_tgt_config.json.SCD 00:06:24.017 + ret=0 00:06:24.017 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.278 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.278 + diff -u /tmp/62.6z9 /tmp/spdk_tgt_config.json.SCD 00:06:24.278 + echo 'INFO: JSON config files are the same' 00:06:24.278 INFO: JSON config files are the same 00:06:24.278 + rm /tmp/62.6z9 /tmp/spdk_tgt_config.json.SCD 00:06:24.278 + exit 0 00:06:24.278 12:18:57 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:24.278 12:18:57 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:24.278 INFO: changing configuration and checking if this can be detected... 
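The "JSON config files are the same" verdict above comes from a normalized diff: the live configuration (save_config over the RPC socket) and the saved spdk_tgt_config.json are both run through config_filter.py -method sort and then compared with diff -u. A condensed sketch of the same check, assuming config_filter.py reads the config on stdin (as the trace suggests) and using fixed temp names in place of the mktemp output:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  $SPDK/test/json_config/config_filter.py -method sort \
    < $SPDK/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'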
00:06:24.278 12:18:57 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.278 12:18:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.539 12:18:57 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.539 12:18:57 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:24.539 12:18:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.539 + '[' 2 -ne 2 ']' 00:06:24.539 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:24.539 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:24.539 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.539 +++ basename /dev/fd/62 00:06:24.539 ++ mktemp /tmp/62.XXX 00:06:24.539 + tmp_file_1=/tmp/62.EUF 00:06:24.539 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.539 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.539 + tmp_file_2=/tmp/spdk_tgt_config.json.Jyj 00:06:24.539 + ret=0 00:06:24.539 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.798 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.798 + diff -u /tmp/62.EUF /tmp/spdk_tgt_config.json.Jyj 00:06:24.798 + ret=1 00:06:24.798 + echo '=== Start of file: /tmp/62.EUF ===' 00:06:24.798 + cat /tmp/62.EUF 00:06:24.798 + echo '=== End of file: /tmp/62.EUF ===' 00:06:24.798 + echo '' 00:06:24.798 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Jyj ===' 00:06:24.798 + cat /tmp/spdk_tgt_config.json.Jyj 00:06:24.798 + echo '=== End of file: /tmp/spdk_tgt_config.json.Jyj ===' 00:06:24.798 + echo '' 00:06:24.798 + rm /tmp/62.EUF /tmp/spdk_tgt_config.json.Jyj 00:06:24.798 + exit 1 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:24.798 INFO: configuration change detected. 
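The second half of the check deliberately breaks that equality: MallocBdevForConfigChangeCheck, the throwaway bdev created just before the first save_config, is deleted over RPC, so the live configuration can no longer match the file and the same diff now has to return non-zero. A sketch under the same assumptions as the previous one:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  if ! diff -u \
      <($SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $SPDK/test/json_config/config_filter.py -method sort) \
      <($SPDK/test/json_config/config_filter.py -method sort < $SPDK/spdk_tgt_config.json); then
    echo 'INFO: configuration change detected.'
  fi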
00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:24.798 12:18:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.798 12:18:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@321 -- # [[ -n 220459 ]] 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:24.798 12:18:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.798 12:18:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:24.798 12:18:58 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:24.799 12:18:58 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:24.799 12:18:58 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:24.799 12:18:58 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:24.799 12:18:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.799 12:18:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.799 12:18:58 json_config -- json_config/json_config.sh@327 -- # killprocess 220459 00:06:24.799 12:18:58 json_config -- common/autotest_common.sh@948 -- # '[' -z 220459 ']' 00:06:24.799 12:18:58 json_config -- common/autotest_common.sh@952 -- # kill -0 220459 00:06:24.799 12:18:58 json_config -- common/autotest_common.sh@953 -- # uname 00:06:24.799 12:18:58 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.799 12:18:58 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 220459 00:06:25.058 12:18:58 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.058 12:18:58 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.058 12:18:58 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 220459' 00:06:25.058 killing process with pid 220459 00:06:25.058 12:18:58 json_config -- common/autotest_common.sh@967 -- # kill 220459 00:06:25.058 12:18:58 json_config -- common/autotest_common.sh@972 -- # wait 220459 00:06:27.600 12:19:00 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.600 12:19:00 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:27.600 12:19:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.600 12:19:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.601 12:19:00 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:27.601 12:19:00 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:27.601 INFO: Success 00:06:27.601 00:06:27.601 real 0m16.789s 00:06:27.601 user 
0m17.895s 00:06:27.601 sys 0m1.941s 00:06:27.601 12:19:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.601 12:19:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.601 ************************************ 00:06:27.601 END TEST json_config 00:06:27.601 ************************************ 00:06:27.601 12:19:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.601 12:19:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:27.601 12:19:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.601 12:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.601 12:19:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.601 ************************************ 00:06:27.601 START TEST json_config_extra_key 00:06:27.601 ************************************ 00:06:27.601 12:19:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.601 12:19:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.601 12:19:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.601 12:19:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.601 12:19:00 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.601 12:19:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.601 12:19:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.601 12:19:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:27.601 12:19:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.601 12:19:00 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:27.601 12:19:00 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:27.601 INFO: launching applications... 00:06:27.601 12:19:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=221790 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.601 Waiting for target to run... 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 221790 /var/tmp/spdk_tgt.sock 00:06:27.601 12:19:00 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 221790 ']' 00:06:27.601 12:19:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:27.601 12:19:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.601 12:19:00 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.601 12:19:00 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:27.601 12:19:00 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.601 12:19:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.601 [2024-07-25 12:19:00.819381] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
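The trace above launches spdk_tgt with the extra_key.json config and then sits in waitforlisten until the target answers on /var/tmp/spdk_tgt.sock. A minimal sketch of that polling pattern, assuming a hypothetical helper name (the real logic lives in autotest_common.sh and differs in detail):

    wait_for_rpc_socket() {
        # Poll an SPDK app's UNIX-domain RPC socket until it responds,
        # giving up after $2 attempts (default 100, spaced 0.1 s apart).
        local sock=$1 retries=${2:-100} i
        for ((i = 0; i < retries; i++)); do
            if [ -S "$sock" ] && ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc_socket /var/tmp/spdk_tgt.sock || exit 1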
00:06:27.601 [2024-07-25 12:19:00.819453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221790 ] 00:06:27.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.862 [2024-07-25 12:19:01.084492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.862 [2024-07-25 12:19:01.136932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.431 12:19:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.431 12:19:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:28.431 00:06:28.431 12:19:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:28.431 INFO: shutting down applications... 00:06:28.431 12:19:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 221790 ]] 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 221790 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 221790 00:06:28.431 12:19:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.000 12:19:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.000 12:19:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.000 12:19:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 221790 00:06:29.000 12:19:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.000 12:19:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:29.000 12:19:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.000 12:19:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.000 SPDK target shutdown done 00:06:29.000 12:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:29.000 Success 00:06:29.000 00:06:29.000 real 0m1.511s 00:06:29.000 user 0m1.211s 00:06:29.000 sys 0m0.382s 00:06:29.000 12:19:02 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.000 12:19:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.000 ************************************ 00:06:29.000 END TEST json_config_extra_key 00:06:29.000 ************************************ 00:06:29.000 12:19:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.000 12:19:02 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.000 12:19:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.000 12:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.000 12:19:02 -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.000 ************************************ 00:06:29.000 START TEST alias_rpc 00:06:29.000 ************************************ 00:06:29.000 12:19:02 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.000 * Looking for test storage... 00:06:29.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:29.000 12:19:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.000 12:19:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=222146 00:06:29.000 12:19:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 222146 00:06:29.000 12:19:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.000 12:19:02 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 222146 ']' 00:06:29.000 12:19:02 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.000 12:19:02 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.000 12:19:02 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.000 12:19:02 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.000 12:19:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.000 [2024-07-25 12:19:02.394408] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:06:29.000 [2024-07-25 12:19:02.394465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222146 ] 00:06:29.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.260 [2024-07-25 12:19:02.476372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.260 [2024-07-25 12:19:02.541141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.829 12:19:03 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.829 12:19:03 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.829 12:19:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:30.088 12:19:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 222146 00:06:30.088 12:19:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 222146 ']' 00:06:30.088 12:19:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 222146 00:06:30.088 12:19:03 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:30.088 12:19:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.088 12:19:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 222146 00:06:30.347 12:19:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.347 12:19:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.347 12:19:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 222146' 00:06:30.347 killing process with pid 222146 00:06:30.347 12:19:03 alias_rpc -- common/autotest_common.sh@967 
-- # kill 222146 00:06:30.347 12:19:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 222146 00:06:30.347 00:06:30.347 real 0m1.468s 00:06:30.347 user 0m1.716s 00:06:30.347 sys 0m0.375s 00:06:30.347 12:19:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.347 12:19:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.347 ************************************ 00:06:30.347 END TEST alias_rpc 00:06:30.347 ************************************ 00:06:30.347 12:19:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.347 12:19:03 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:30.347 12:19:03 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:30.347 12:19:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.347 12:19:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.347 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:06:30.606 ************************************ 00:06:30.606 START TEST spdkcli_tcp 00:06:30.606 ************************************ 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:30.606 * Looking for test storage... 00:06:30.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=222497 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 222497 00:06:30.606 12:19:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 222497 ']' 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.606 12:19:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.606 [2024-07-25 12:19:03.949052] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
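Both json_config_extra_key and alias_rpc above finish by signalling the target and polling until its PID disappears. A hedged sketch of that SIGINT-then-wait loop (the retry cadence mirrors the trace; the function name is made up):

    stop_spdk_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0      # already gone
        for ((i = 0; i < 30; i++)); do                   # up to ~15 s at 0.5 s per retry
            kill -0 "$pid" 2>/dev/null || return 0       # kill -0 only checks existence
            sleep 0.5
        done
        kill -SIGKILL "$pid" 2>/dev/null                 # hard stop if SIGINT was ignored
    }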
00:06:30.606 [2024-07-25 12:19:03.949102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222497 ] 00:06:30.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.866 [2024-07-25 12:19:04.031480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.866 [2024-07-25 12:19:04.095682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.866 [2024-07-25 12:19:04.095786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.436 12:19:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.436 12:19:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:31.436 12:19:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=222542 00:06:31.436 12:19:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:31.436 12:19:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:31.696 [ 00:06:31.696 "bdev_malloc_delete", 00:06:31.696 "bdev_malloc_create", 00:06:31.696 "bdev_null_resize", 00:06:31.696 "bdev_null_delete", 00:06:31.696 "bdev_null_create", 00:06:31.696 "bdev_nvme_cuse_unregister", 00:06:31.696 "bdev_nvme_cuse_register", 00:06:31.696 "bdev_opal_new_user", 00:06:31.696 "bdev_opal_set_lock_state", 00:06:31.696 "bdev_opal_delete", 00:06:31.696 "bdev_opal_get_info", 00:06:31.696 "bdev_opal_create", 00:06:31.696 "bdev_nvme_opal_revert", 00:06:31.696 "bdev_nvme_opal_init", 00:06:31.696 "bdev_nvme_send_cmd", 00:06:31.696 "bdev_nvme_get_path_iostat", 00:06:31.696 "bdev_nvme_get_mdns_discovery_info", 00:06:31.696 "bdev_nvme_stop_mdns_discovery", 00:06:31.696 "bdev_nvme_start_mdns_discovery", 00:06:31.696 "bdev_nvme_set_multipath_policy", 00:06:31.696 "bdev_nvme_set_preferred_path", 00:06:31.696 "bdev_nvme_get_io_paths", 00:06:31.696 "bdev_nvme_remove_error_injection", 00:06:31.696 "bdev_nvme_add_error_injection", 00:06:31.696 "bdev_nvme_get_discovery_info", 00:06:31.696 "bdev_nvme_stop_discovery", 00:06:31.696 "bdev_nvme_start_discovery", 00:06:31.696 "bdev_nvme_get_controller_health_info", 00:06:31.696 "bdev_nvme_disable_controller", 00:06:31.696 "bdev_nvme_enable_controller", 00:06:31.696 "bdev_nvme_reset_controller", 00:06:31.696 "bdev_nvme_get_transport_statistics", 00:06:31.696 "bdev_nvme_apply_firmware", 00:06:31.696 "bdev_nvme_detach_controller", 00:06:31.696 "bdev_nvme_get_controllers", 00:06:31.696 "bdev_nvme_attach_controller", 00:06:31.696 "bdev_nvme_set_hotplug", 00:06:31.696 "bdev_nvme_set_options", 00:06:31.696 "bdev_passthru_delete", 00:06:31.696 "bdev_passthru_create", 00:06:31.696 "bdev_lvol_set_parent_bdev", 00:06:31.696 "bdev_lvol_set_parent", 00:06:31.696 "bdev_lvol_check_shallow_copy", 00:06:31.696 "bdev_lvol_start_shallow_copy", 00:06:31.696 "bdev_lvol_grow_lvstore", 00:06:31.696 "bdev_lvol_get_lvols", 00:06:31.696 "bdev_lvol_get_lvstores", 00:06:31.696 "bdev_lvol_delete", 00:06:31.696 "bdev_lvol_set_read_only", 00:06:31.696 "bdev_lvol_resize", 00:06:31.696 "bdev_lvol_decouple_parent", 00:06:31.696 "bdev_lvol_inflate", 00:06:31.696 "bdev_lvol_rename", 00:06:31.696 "bdev_lvol_clone_bdev", 00:06:31.696 "bdev_lvol_clone", 00:06:31.696 "bdev_lvol_snapshot", 00:06:31.697 "bdev_lvol_create", 00:06:31.697 "bdev_lvol_delete_lvstore", 00:06:31.697 
"bdev_lvol_rename_lvstore", 00:06:31.697 "bdev_lvol_create_lvstore", 00:06:31.697 "bdev_raid_set_options", 00:06:31.697 "bdev_raid_remove_base_bdev", 00:06:31.697 "bdev_raid_add_base_bdev", 00:06:31.697 "bdev_raid_delete", 00:06:31.697 "bdev_raid_create", 00:06:31.697 "bdev_raid_get_bdevs", 00:06:31.697 "bdev_error_inject_error", 00:06:31.697 "bdev_error_delete", 00:06:31.697 "bdev_error_create", 00:06:31.697 "bdev_split_delete", 00:06:31.697 "bdev_split_create", 00:06:31.697 "bdev_delay_delete", 00:06:31.697 "bdev_delay_create", 00:06:31.697 "bdev_delay_update_latency", 00:06:31.697 "bdev_zone_block_delete", 00:06:31.697 "bdev_zone_block_create", 00:06:31.697 "blobfs_create", 00:06:31.697 "blobfs_detect", 00:06:31.697 "blobfs_set_cache_size", 00:06:31.697 "bdev_aio_delete", 00:06:31.697 "bdev_aio_rescan", 00:06:31.697 "bdev_aio_create", 00:06:31.697 "bdev_ftl_set_property", 00:06:31.697 "bdev_ftl_get_properties", 00:06:31.697 "bdev_ftl_get_stats", 00:06:31.697 "bdev_ftl_unmap", 00:06:31.697 "bdev_ftl_unload", 00:06:31.697 "bdev_ftl_delete", 00:06:31.697 "bdev_ftl_load", 00:06:31.697 "bdev_ftl_create", 00:06:31.697 "bdev_virtio_attach_controller", 00:06:31.697 "bdev_virtio_scsi_get_devices", 00:06:31.697 "bdev_virtio_detach_controller", 00:06:31.697 "bdev_virtio_blk_set_hotplug", 00:06:31.697 "bdev_iscsi_delete", 00:06:31.697 "bdev_iscsi_create", 00:06:31.697 "bdev_iscsi_set_options", 00:06:31.697 "accel_error_inject_error", 00:06:31.697 "ioat_scan_accel_module", 00:06:31.697 "dsa_scan_accel_module", 00:06:31.697 "iaa_scan_accel_module", 00:06:31.697 "vfu_virtio_create_scsi_endpoint", 00:06:31.697 "vfu_virtio_scsi_remove_target", 00:06:31.697 "vfu_virtio_scsi_add_target", 00:06:31.697 "vfu_virtio_create_blk_endpoint", 00:06:31.697 "vfu_virtio_delete_endpoint", 00:06:31.697 "keyring_file_remove_key", 00:06:31.697 "keyring_file_add_key", 00:06:31.697 "keyring_linux_set_options", 00:06:31.697 "iscsi_get_histogram", 00:06:31.697 "iscsi_enable_histogram", 00:06:31.697 "iscsi_set_options", 00:06:31.697 "iscsi_get_auth_groups", 00:06:31.697 "iscsi_auth_group_remove_secret", 00:06:31.697 "iscsi_auth_group_add_secret", 00:06:31.697 "iscsi_delete_auth_group", 00:06:31.697 "iscsi_create_auth_group", 00:06:31.697 "iscsi_set_discovery_auth", 00:06:31.697 "iscsi_get_options", 00:06:31.697 "iscsi_target_node_request_logout", 00:06:31.697 "iscsi_target_node_set_redirect", 00:06:31.697 "iscsi_target_node_set_auth", 00:06:31.697 "iscsi_target_node_add_lun", 00:06:31.697 "iscsi_get_stats", 00:06:31.697 "iscsi_get_connections", 00:06:31.697 "iscsi_portal_group_set_auth", 00:06:31.697 "iscsi_start_portal_group", 00:06:31.697 "iscsi_delete_portal_group", 00:06:31.697 "iscsi_create_portal_group", 00:06:31.697 "iscsi_get_portal_groups", 00:06:31.697 "iscsi_delete_target_node", 00:06:31.697 "iscsi_target_node_remove_pg_ig_maps", 00:06:31.697 "iscsi_target_node_add_pg_ig_maps", 00:06:31.697 "iscsi_create_target_node", 00:06:31.697 "iscsi_get_target_nodes", 00:06:31.697 "iscsi_delete_initiator_group", 00:06:31.697 "iscsi_initiator_group_remove_initiators", 00:06:31.697 "iscsi_initiator_group_add_initiators", 00:06:31.697 "iscsi_create_initiator_group", 00:06:31.697 "iscsi_get_initiator_groups", 00:06:31.697 "nvmf_set_crdt", 00:06:31.697 "nvmf_set_config", 00:06:31.697 "nvmf_set_max_subsystems", 00:06:31.697 "nvmf_stop_mdns_prr", 00:06:31.697 "nvmf_publish_mdns_prr", 00:06:31.697 "nvmf_subsystem_get_listeners", 00:06:31.697 "nvmf_subsystem_get_qpairs", 00:06:31.697 "nvmf_subsystem_get_controllers", 00:06:31.697 
"nvmf_get_stats", 00:06:31.697 "nvmf_get_transports", 00:06:31.697 "nvmf_create_transport", 00:06:31.697 "nvmf_get_targets", 00:06:31.697 "nvmf_delete_target", 00:06:31.697 "nvmf_create_target", 00:06:31.697 "nvmf_subsystem_allow_any_host", 00:06:31.697 "nvmf_subsystem_remove_host", 00:06:31.697 "nvmf_subsystem_add_host", 00:06:31.697 "nvmf_ns_remove_host", 00:06:31.697 "nvmf_ns_add_host", 00:06:31.697 "nvmf_subsystem_remove_ns", 00:06:31.697 "nvmf_subsystem_add_ns", 00:06:31.697 "nvmf_subsystem_listener_set_ana_state", 00:06:31.697 "nvmf_discovery_get_referrals", 00:06:31.697 "nvmf_discovery_remove_referral", 00:06:31.697 "nvmf_discovery_add_referral", 00:06:31.697 "nvmf_subsystem_remove_listener", 00:06:31.697 "nvmf_subsystem_add_listener", 00:06:31.697 "nvmf_delete_subsystem", 00:06:31.697 "nvmf_create_subsystem", 00:06:31.697 "nvmf_get_subsystems", 00:06:31.697 "env_dpdk_get_mem_stats", 00:06:31.697 "nbd_get_disks", 00:06:31.697 "nbd_stop_disk", 00:06:31.697 "nbd_start_disk", 00:06:31.697 "ublk_recover_disk", 00:06:31.697 "ublk_get_disks", 00:06:31.697 "ublk_stop_disk", 00:06:31.697 "ublk_start_disk", 00:06:31.697 "ublk_destroy_target", 00:06:31.697 "ublk_create_target", 00:06:31.697 "virtio_blk_create_transport", 00:06:31.697 "virtio_blk_get_transports", 00:06:31.697 "vhost_controller_set_coalescing", 00:06:31.697 "vhost_get_controllers", 00:06:31.697 "vhost_delete_controller", 00:06:31.697 "vhost_create_blk_controller", 00:06:31.697 "vhost_scsi_controller_remove_target", 00:06:31.697 "vhost_scsi_controller_add_target", 00:06:31.697 "vhost_start_scsi_controller", 00:06:31.697 "vhost_create_scsi_controller", 00:06:31.697 "thread_set_cpumask", 00:06:31.697 "framework_get_governor", 00:06:31.697 "framework_get_scheduler", 00:06:31.697 "framework_set_scheduler", 00:06:31.697 "framework_get_reactors", 00:06:31.697 "thread_get_io_channels", 00:06:31.697 "thread_get_pollers", 00:06:31.697 "thread_get_stats", 00:06:31.697 "framework_monitor_context_switch", 00:06:31.697 "spdk_kill_instance", 00:06:31.697 "log_enable_timestamps", 00:06:31.697 "log_get_flags", 00:06:31.697 "log_clear_flag", 00:06:31.697 "log_set_flag", 00:06:31.697 "log_get_level", 00:06:31.697 "log_set_level", 00:06:31.697 "log_get_print_level", 00:06:31.697 "log_set_print_level", 00:06:31.697 "framework_enable_cpumask_locks", 00:06:31.697 "framework_disable_cpumask_locks", 00:06:31.697 "framework_wait_init", 00:06:31.697 "framework_start_init", 00:06:31.697 "scsi_get_devices", 00:06:31.697 "bdev_get_histogram", 00:06:31.697 "bdev_enable_histogram", 00:06:31.697 "bdev_set_qos_limit", 00:06:31.697 "bdev_set_qd_sampling_period", 00:06:31.697 "bdev_get_bdevs", 00:06:31.697 "bdev_reset_iostat", 00:06:31.697 "bdev_get_iostat", 00:06:31.697 "bdev_examine", 00:06:31.697 "bdev_wait_for_examine", 00:06:31.697 "bdev_set_options", 00:06:31.697 "notify_get_notifications", 00:06:31.697 "notify_get_types", 00:06:31.697 "accel_get_stats", 00:06:31.697 "accel_set_options", 00:06:31.697 "accel_set_driver", 00:06:31.697 "accel_crypto_key_destroy", 00:06:31.697 "accel_crypto_keys_get", 00:06:31.697 "accel_crypto_key_create", 00:06:31.697 "accel_assign_opc", 00:06:31.697 "accel_get_module_info", 00:06:31.697 "accel_get_opc_assignments", 00:06:31.697 "vmd_rescan", 00:06:31.697 "vmd_remove_device", 00:06:31.697 "vmd_enable", 00:06:31.697 "sock_get_default_impl", 00:06:31.697 "sock_set_default_impl", 00:06:31.697 "sock_impl_set_options", 00:06:31.697 "sock_impl_get_options", 00:06:31.697 "iobuf_get_stats", 00:06:31.697 "iobuf_set_options", 
00:06:31.697 "keyring_get_keys", 00:06:31.697 "framework_get_pci_devices", 00:06:31.697 "framework_get_config", 00:06:31.697 "framework_get_subsystems", 00:06:31.697 "vfu_tgt_set_base_path", 00:06:31.697 "trace_get_info", 00:06:31.697 "trace_get_tpoint_group_mask", 00:06:31.697 "trace_disable_tpoint_group", 00:06:31.697 "trace_enable_tpoint_group", 00:06:31.697 "trace_clear_tpoint_mask", 00:06:31.697 "trace_set_tpoint_mask", 00:06:31.697 "spdk_get_version", 00:06:31.697 "rpc_get_methods" 00:06:31.697 ] 00:06:31.697 12:19:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:31.697 12:19:04 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.697 12:19:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 12:19:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:31.697 12:19:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 222497 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 222497 ']' 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 222497 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 222497 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 222497' 00:06:31.697 killing process with pid 222497 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 222497 00:06:31.697 12:19:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 222497 00:06:31.957 00:06:31.957 real 0m1.495s 00:06:31.957 user 0m2.852s 00:06:31.957 sys 0m0.435s 00:06:31.957 12:19:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.957 12:19:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.957 ************************************ 00:06:31.957 END TEST spdkcli_tcp 00:06:31.957 ************************************ 00:06:31.957 12:19:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.957 12:19:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:31.957 12:19:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.957 12:19:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.957 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.957 ************************************ 00:06:31.957 START TEST dpdk_mem_utility 00:06:31.957 ************************************ 00:06:31.957 12:19:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.250 * Looking for test storage... 
00:06:32.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:32.250 12:19:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:32.250 12:19:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=222878 00:06:32.250 12:19:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 222878 00:06:32.250 12:19:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.250 12:19:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 222878 ']' 00:06:32.250 12:19:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.250 12:19:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.250 12:19:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.250 12:19:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.250 12:19:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:32.250 [2024-07-25 12:19:05.516323] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:06:32.250 [2024-07-25 12:19:05.516403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222878 ] 00:06:32.250 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.250 [2024-07-25 12:19:05.600028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.250 [2024-07-25 12:19:05.668316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.196 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.196 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:33.196 12:19:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:33.196 12:19:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:33.196 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.196 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.196 { 00:06:33.196 "filename": "/tmp/spdk_mem_dump.txt" 00:06:33.196 } 00:06:33.196 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.196 12:19:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.196 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:33.196 1 heaps totaling size 814.000000 MiB 00:06:33.196 size: 814.000000 MiB heap id: 0 00:06:33.196 end heaps---------- 00:06:33.196 8 mempools totaling size 598.116089 MiB 00:06:33.196 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:33.196 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:33.196 size: 84.521057 MiB name: bdev_io_222878 00:06:33.196 size: 51.011292 MiB name: evtpool_222878 00:06:33.197 size: 
50.003479 MiB name: msgpool_222878 00:06:33.197 size: 21.763794 MiB name: PDU_Pool 00:06:33.197 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:33.197 size: 0.026123 MiB name: Session_Pool 00:06:33.197 end mempools------- 00:06:33.197 6 memzones totaling size 4.142822 MiB 00:06:33.197 size: 1.000366 MiB name: RG_ring_0_222878 00:06:33.197 size: 1.000366 MiB name: RG_ring_1_222878 00:06:33.197 size: 1.000366 MiB name: RG_ring_4_222878 00:06:33.197 size: 1.000366 MiB name: RG_ring_5_222878 00:06:33.197 size: 0.125366 MiB name: RG_ring_2_222878 00:06:33.197 size: 0.015991 MiB name: RG_ring_3_222878 00:06:33.197 end memzones------- 00:06:33.197 12:19:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:33.197 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:33.197 list of free elements. size: 12.519348 MiB 00:06:33.197 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:33.197 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:33.197 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:33.197 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:33.197 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:33.197 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:33.197 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:33.197 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:33.197 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:33.197 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:33.197 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:33.197 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:33.197 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:33.197 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:33.197 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:33.197 list of standard malloc elements. 
size: 199.218079 MiB 00:06:33.197 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:33.197 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:33.197 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:33.197 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:33.197 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:33.197 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:33.197 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:33.197 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:33.197 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:33.197 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:33.197 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:33.197 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:33.197 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:33.197 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:33.197 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:33.197 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:33.197 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:33.197 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:33.197 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:33.197 list of memzone associated elements. 
size: 602.262573 MiB 00:06:33.197 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:33.197 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:33.197 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:33.197 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:33.197 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:33.197 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_222878_0 00:06:33.197 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:33.197 associated memzone info: size: 48.002930 MiB name: MP_evtpool_222878_0 00:06:33.197 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:33.197 associated memzone info: size: 48.002930 MiB name: MP_msgpool_222878_0 00:06:33.197 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:33.197 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:33.197 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:33.197 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:33.197 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:33.197 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_222878 00:06:33.197 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:33.197 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_222878 00:06:33.197 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:33.197 associated memzone info: size: 1.007996 MiB name: MP_evtpool_222878 00:06:33.197 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:33.197 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:33.197 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:33.197 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:33.197 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:33.197 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:33.197 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:33.197 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:33.197 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:33.197 associated memzone info: size: 1.000366 MiB name: RG_ring_0_222878 00:06:33.197 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:33.197 associated memzone info: size: 1.000366 MiB name: RG_ring_1_222878 00:06:33.197 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:33.197 associated memzone info: size: 1.000366 MiB name: RG_ring_4_222878 00:06:33.197 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:33.197 associated memzone info: size: 1.000366 MiB name: RG_ring_5_222878 00:06:33.197 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:33.197 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_222878 00:06:33.197 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:33.197 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:33.197 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:33.197 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:33.197 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:33.197 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:33.197 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:33.197 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_222878 00:06:33.197 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:33.197 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:33.197 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:33.197 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:33.197 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:33.197 associated memzone info: size: 0.015991 MiB name: RG_ring_3_222878 00:06:33.197 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:33.197 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:33.197 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:33.197 associated memzone info: size: 0.000183 MiB name: MP_msgpool_222878 00:06:33.197 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:33.197 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_222878 00:06:33.197 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:33.197 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:33.197 12:19:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:33.197 12:19:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 222878 00:06:33.197 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 222878 ']' 00:06:33.197 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 222878 00:06:33.197 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:33.197 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.197 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 222878 00:06:33.197 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.197 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.198 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 222878' 00:06:33.198 killing process with pid 222878 00:06:33.198 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 222878 00:06:33.198 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 222878 00:06:33.457 00:06:33.457 real 0m1.356s 00:06:33.457 user 0m1.489s 00:06:33.457 sys 0m0.390s 00:06:33.457 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.457 12:19:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.457 ************************************ 00:06:33.457 END TEST dpdk_mem_utility 00:06:33.457 ************************************ 00:06:33.457 12:19:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.457 12:19:06 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:33.457 12:19:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.457 12:19:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.457 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:06:33.458 ************************************ 00:06:33.458 START TEST event 00:06:33.458 ************************************ 00:06:33.458 12:19:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:33.718 * Looking for test storage... 
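The heap, mempool and memzone dump above comes from the env_dpdk_get_mem_stats RPC (which reports /tmp/spdk_mem_dump.txt as its output file) followed by two dpdk_mem_info.py passes; the eight mempools listed do add up to the reported 598.116089 MiB. A hedged sketch of the same inspection against a running target:

    ./scripts/rpc.py env_dpdk_get_mem_stats    # target dumps its DPDK memory state to /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0            # free and malloc elements of heap id 0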
00:06:33.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:33.718 12:19:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:33.718 12:19:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:33.718 12:19:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:33.718 12:19:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:33.718 12:19:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.718 12:19:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.718 ************************************ 00:06:33.718 START TEST event_perf 00:06:33.718 ************************************ 00:06:33.718 12:19:06 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:33.718 Running I/O for 1 seconds...[2024-07-25 12:19:06.953614] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:06:33.718 [2024-07-25 12:19:06.953709] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223135 ] 00:06:33.718 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.718 [2024-07-25 12:19:07.043577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.718 [2024-07-25 12:19:07.115296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.718 [2024-07-25 12:19:07.115445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.718 [2024-07-25 12:19:07.115560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.718 [2024-07-25 12:19:07.115570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.097 Running I/O for 1 seconds... 00:06:35.097 lcore 0: 77136 00:06:35.097 lcore 1: 77139 00:06:35.097 lcore 2: 77143 00:06:35.097 lcore 3: 77139 00:06:35.097 done. 00:06:35.097 00:06:35.097 real 0m1.235s 00:06:35.097 user 0m4.125s 00:06:35.097 sys 0m0.105s 00:06:35.097 12:19:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.097 12:19:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.097 ************************************ 00:06:35.097 END TEST event_perf 00:06:35.097 ************************************ 00:06:35.097 12:19:08 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.097 12:19:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.097 12:19:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:35.097 12:19:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.097 12:19:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.097 ************************************ 00:06:35.097 START TEST event_reactor 00:06:35.097 ************************************ 00:06:35.097 12:19:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.097 [2024-07-25 12:19:08.262475] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
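The event_perf run above pinned four reactors with -m 0xF and reported lcores 0 through 3; the -m argument used throughout these tests is simply a bitmap of logical cores. An illustrative snippet, not part of the test scripts:

    # -m 0x1 -> binary 0001 -> core 0 only; -m 0x3 -> cores 0-1; -m 0xF -> cores 0-3
    printf 'mask for 4 cores: 0x%X\n' $(( (1 << 4) - 1 ))   # prints 0xF
    printf 'mask for 2 cores: 0x%X\n' $(( (1 << 2) - 1 ))   # prints 0x3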
00:06:35.098 [2024-07-25 12:19:08.262676] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223291 ] 00:06:35.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.098 [2024-07-25 12:19:08.350237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.098 [2024-07-25 12:19:08.427044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.479 test_start 00:06:36.479 oneshot 00:06:36.479 tick 100 00:06:36.479 tick 100 00:06:36.479 tick 250 00:06:36.479 tick 100 00:06:36.479 tick 100 00:06:36.479 tick 250 00:06:36.479 tick 100 00:06:36.479 tick 500 00:06:36.479 tick 100 00:06:36.479 tick 100 00:06:36.479 tick 250 00:06:36.479 tick 100 00:06:36.479 tick 100 00:06:36.479 test_end 00:06:36.479 00:06:36.479 real 0m1.235s 00:06:36.479 user 0m1.143s 00:06:36.479 sys 0m0.087s 00:06:36.479 12:19:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.479 12:19:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:36.479 ************************************ 00:06:36.479 END TEST event_reactor 00:06:36.479 ************************************ 00:06:36.479 12:19:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.479 12:19:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:36.479 12:19:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:36.479 12:19:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.479 12:19:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.479 ************************************ 00:06:36.479 START TEST event_reactor_perf 00:06:36.479 ************************************ 00:06:36.479 12:19:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:36.479 [2024-07-25 12:19:09.573971] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
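Every sub-test in this log is driven through the same wrapper that prints the START TEST / END TEST banners and the real/user/sys timing seen after each run. A minimal sketch of that pattern, assuming a simplified run_test (the actual autotest_common.sh version also manages xtrace and per-test timing records):

    run_test() {
        local name=$1 rc; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test event_reactor_perf ./test/event/reactor_perf/reactor_perf -t 1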
00:06:36.479 [2024-07-25 12:19:09.574061] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223600 ] 00:06:36.479 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.479 [2024-07-25 12:19:09.661316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.479 [2024-07-25 12:19:09.725404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.418 test_start 00:06:37.418 test_end 00:06:37.418 Performance: 401678 events per second 00:06:37.418 00:06:37.418 real 0m1.224s 00:06:37.418 user 0m1.127s 00:06:37.418 sys 0m0.092s 00:06:37.418 12:19:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.418 12:19:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.418 ************************************ 00:06:37.418 END TEST event_reactor_perf 00:06:37.418 ************************************ 00:06:37.418 12:19:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:37.418 12:19:10 event -- event/event.sh@49 -- # uname -s 00:06:37.418 12:19:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:37.418 12:19:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:37.418 12:19:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.418 12:19:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.418 12:19:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.679 ************************************ 00:06:37.679 START TEST event_scheduler 00:06:37.679 ************************************ 00:06:37.679 12:19:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:37.679 * Looking for test storage... 00:06:37.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:37.679 12:19:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:37.679 12:19:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=223943 00:06:37.679 12:19:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.679 12:19:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:37.679 12:19:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 223943 00:06:37.679 12:19:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 223943 ']' 00:06:37.679 12:19:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.679 12:19:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.679 12:19:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.679 12:19:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.679 12:19:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.679 [2024-07-25 12:19:11.000764] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:06:37.679 [2024-07-25 12:19:11.000814] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223943 ] 00:06:37.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.939 [2024-07-25 12:19:11.130499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.939 [2024-07-25 12:19:11.293624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.939 [2024-07-25 12:19:11.293909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.939 [2024-07-25 12:19:11.294055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.939 [2024-07-25 12:19:11.294075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.508 12:19:11 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.508 12:19:11 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:38.508 12:19:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:38.508 12:19:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.508 12:19:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.508 [2024-07-25 12:19:11.865210] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:38.508 [2024-07-25 12:19:11.865259] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:38.508 [2024-07-25 12:19:11.865297] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:38.508 [2024-07-25 12:19:11.865324] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:38.508 [2024-07-25 12:19:11.865350] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:38.508 12:19:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.508 12:19:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:38.508 12:19:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.508 12:19:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 [2024-07-25 12:19:11.964540] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
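The scheduler test above starts its app with --wait-for-rpc, so the framework pauses before subsystem init; the RPCs then select the dynamic scheduler and resume startup (the trace also shows the dynamic scheduler proceeding even though the DPDK governor cannot initialize on this host). A hedged reconstruction of that flow, with paths shortened:

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # wait for /var/tmp/spdk.sock as in the earlier polling sketch
    ./scripts/rpc.py framework_set_scheduler dynamic   # must be set before init completes
    ./scripts/rpc.py framework_start_init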
00:06:38.769 12:19:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:38.769 12:19:11 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.769 12:19:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.769 12:19:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 ************************************ 00:06:38.769 START TEST scheduler_create_thread 00:06:38.769 ************************************ 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 2 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 3 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 4 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 5 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 6 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 7 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 8 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 9 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.769 10 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.769 12:19:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.150 12:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.150 12:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:40.150 12:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:40.150 12:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.150 12:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.089 12:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.089 12:19:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:41.089 12:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.089 12:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.026 12:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.026 12:19:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:42.026 12:19:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:42.026 12:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.026 12:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.595 12:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.595 00:06:42.595 real 0m3.897s 00:06:42.595 user 0m0.026s 00:06:42.595 sys 0m0.006s 00:06:42.595 12:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.595 12:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.595 ************************************ 00:06:42.595 END TEST scheduler_create_thread 00:06:42.595 ************************************ 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:42.595 12:19:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:42.595 12:19:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 223943 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 223943 ']' 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 223943 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 223943 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 223943' 00:06:42.595 killing process with pid 223943 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 223943 00:06:42.595 12:19:15 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 223943 00:06:43.164 [2024-07-25 12:19:16.282393] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
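The scheduler_create_thread test traced above drives a test-only RPC plugin (scheduler_plugin) to create and retire SPDK threads with different CPU masks and activity levels. A condensed sketch of that sequence follows; it repeats the commands shown in the trace (one pinned thread per core mask 0x1..0x8 is collapsed into single examples), and the thread IDs 11 and 12 are simply what this particular run reported:

# rpc_cmd stands for "scripts/rpc.py -s <socket>" with the test's scheduler plugin on the path
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% active
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50              # raise its activity to 50%
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                     # thread is removed again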
00:06:43.424 00:06:43.424 real 0m5.766s 00:06:43.424 user 0m12.391s 00:06:43.424 sys 0m0.445s 00:06:43.424 12:19:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.424 12:19:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.424 ************************************ 00:06:43.424 END TEST event_scheduler 00:06:43.424 ************************************ 00:06:43.424 12:19:16 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.424 12:19:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:43.424 12:19:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:43.424 12:19:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.424 12:19:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.424 12:19:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.424 ************************************ 00:06:43.424 START TEST app_repeat 00:06:43.424 ************************************ 00:06:43.424 12:19:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=224916 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 224916' 00:06:43.424 Process app_repeat pid: 224916 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:43.424 spdk_app_start Round 0 00:06:43.424 12:19:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 224916 /var/tmp/spdk-nbd.sock 00:06:43.424 12:19:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 224916 ']' 00:06:43.424 12:19:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.424 12:19:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.424 12:19:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.424 12:19:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.424 12:19:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.424 [2024-07-25 12:19:16.745332] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:06:43.424 [2024-07-25 12:19:16.745392] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224916 ] 00:06:43.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.424 [2024-07-25 12:19:16.827279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.684 [2024-07-25 12:19:16.891777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.684 [2024-07-25 12:19:16.891861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.684 12:19:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.684 12:19:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.684 12:19:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.943 Malloc0 00:06:43.943 12:19:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.943 Malloc1 00:06:43.943 12:19:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.943 12:19:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.202 /dev/nbd0 00:06:44.202 12:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.202 12:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.202 12:19:17 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.202 1+0 records in 00:06:44.202 1+0 records out 00:06:44.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272288 s, 15.0 MB/s 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.202 12:19:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.202 12:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.202 12:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.202 12:19:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.462 /dev/nbd1 00:06:44.462 12:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.462 12:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.462 1+0 records in 00:06:44.462 1+0 records out 00:06:44.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205123 s, 20.0 MB/s 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.462 12:19:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.462 12:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.462 12:19:17 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.462 12:19:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.462 12:19:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.462 12:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.723 12:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.723 { 00:06:44.723 "nbd_device": "/dev/nbd0", 00:06:44.723 "bdev_name": "Malloc0" 00:06:44.723 }, 00:06:44.723 { 00:06:44.723 "nbd_device": "/dev/nbd1", 00:06:44.723 "bdev_name": "Malloc1" 00:06:44.723 } 00:06:44.723 ]' 00:06:44.723 12:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.723 { 00:06:44.723 "nbd_device": "/dev/nbd0", 00:06:44.723 "bdev_name": "Malloc0" 00:06:44.723 }, 00:06:44.723 { 00:06:44.723 "nbd_device": "/dev/nbd1", 00:06:44.723 "bdev_name": "Malloc1" 00:06:44.723 } 00:06:44.723 ]' 00:06:44.723 12:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.723 /dev/nbd1' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.723 /dev/nbd1' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.723 256+0 records in 00:06:44.723 256+0 records out 00:06:44.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462711 s, 227 MB/s 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.723 256+0 records in 00:06:44.723 256+0 records out 00:06:44.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147048 s, 71.3 MB/s 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.723 256+0 records in 00:06:44.723 256+0 records out 00:06:44.723 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0151093 s, 69.4 MB/s 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.723 12:19:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.983 12:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.983 12:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.983 12:19:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.983 12:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.983 12:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.983 12:19:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.983 12:19:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.984 12:19:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.984 12:19:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.984 12:19:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.245 12:19:18 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.245 12:19:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.506 12:19:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.506 12:19:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.766 12:19:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.026 [2024-07-25 12:19:19.208159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.026 [2024-07-25 12:19:19.269819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.026 [2024-07-25 12:19:19.269823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.026 [2024-07-25 12:19:19.300116] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.026 [2024-07-25 12:19:19.300157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.317 12:19:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.317 12:19:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:49.317 spdk_app_start Round 1 00:06:49.317 12:19:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 224916 /var/tmp/spdk-nbd.sock 00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 224916 ']' 00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
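Each app_repeat round traced here performs the same nbd round trip: create two malloc bdevs, expose them as /dev/nbd0 and /dev/nbd1, write a 1 MiB random pattern through the kernel nbd devices, read it back with cmp, then tear everything down and ask the app to exit. A stripped-down sketch of that verify loop, using only the RPCs and shell commands that appear in the trace (RANDTEST is a placeholder for the test's temporary pattern file, and rpc_cmd stands for scripts/rpc.py -s /var/tmp/spdk-nbd.sock):

RANDTEST=/tmp/nbdrandtest                              # placeholder path for the pattern file
rpc_cmd bdev_malloc_create 64 4096                     # -> Malloc0
rpc_cmd bdev_malloc_create 64 4096                     # -> Malloc1
rpc_cmd nbd_start_disk Malloc0 /dev/nbd0
rpc_cmd nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of="$RANDTEST" bs=4096 count=256    # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$RANDTEST" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$RANDTEST" "$nbd"                    # verify what was just written
done
rpc_cmd nbd_stop_disk /dev/nbd0
rpc_cmd nbd_stop_disk /dev/nbd1
rpc_cmd spdk_kill_instance SIGTERM                     # end of this round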
00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.317 12:19:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:49.317 12:19:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.317 Malloc0 00:06:49.317 12:19:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.317 Malloc1 00:06:49.317 12:19:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.317 12:19:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.577 /dev/nbd0 00:06:49.577 12:19:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.577 12:19:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:49.577 1+0 records in 00:06:49.577 1+0 records out 00:06:49.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224194 s, 18.3 MB/s 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.577 12:19:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.577 12:19:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.577 12:19:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.577 12:19:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.837 /dev/nbd1 00:06:49.837 12:19:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.837 12:19:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.837 1+0 records in 00:06:49.837 1+0 records out 00:06:49.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273554 s, 15.0 MB/s 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.837 12:19:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.837 12:19:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.837 12:19:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.837 12:19:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.837 12:19:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.837 12:19:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.098 12:19:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:50.098 { 00:06:50.098 "nbd_device": "/dev/nbd0", 00:06:50.098 "bdev_name": "Malloc0" 00:06:50.098 }, 00:06:50.098 { 00:06:50.098 "nbd_device": "/dev/nbd1", 00:06:50.099 "bdev_name": "Malloc1" 00:06:50.099 } 00:06:50.099 ]' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.099 { 00:06:50.099 "nbd_device": "/dev/nbd0", 00:06:50.099 "bdev_name": "Malloc0" 00:06:50.099 }, 00:06:50.099 { 00:06:50.099 "nbd_device": "/dev/nbd1", 00:06:50.099 "bdev_name": "Malloc1" 00:06:50.099 } 00:06:50.099 ]' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.099 /dev/nbd1' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.099 /dev/nbd1' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.099 256+0 records in 00:06:50.099 256+0 records out 00:06:50.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118936 s, 88.2 MB/s 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.099 256+0 records in 00:06:50.099 256+0 records out 00:06:50.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141453 s, 74.1 MB/s 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.099 256+0 records in 00:06:50.099 256+0 records out 00:06:50.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157862 s, 66.4 MB/s 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.099 12:19:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.361 12:19:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.621 12:19:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.880 12:19:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.880 12:19:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.140 12:19:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.140 [2024-07-25 12:19:24.535732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.399 [2024-07-25 12:19:24.597003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.399 [2024-07-25 12:19:24.597007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.399 [2024-07-25 12:19:24.627827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.399 [2024-07-25 12:19:24.627863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.721 12:19:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.721 12:19:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:54.721 spdk_app_start Round 2 00:06:54.721 12:19:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 224916 /var/tmp/spdk-nbd.sock 00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 224916 ']' 00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.721 12:19:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:54.721 12:19:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.721 Malloc0 00:06:54.721 12:19:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.721 Malloc1 00:06:54.721 12:19:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.721 12:19:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.981 /dev/nbd0 00:06:54.981 12:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.981 12:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.981 12:19:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:54.982 1+0 records in 00:06:54.982 1+0 records out 00:06:54.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283339 s, 14.5 MB/s 00:06:54.982 12:19:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.982 12:19:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.982 12:19:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.982 12:19:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.982 12:19:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.982 12:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.982 12:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.982 12:19:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.242 /dev/nbd1 00:06:55.242 12:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.242 12:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.242 1+0 records in 00:06:55.242 1+0 records out 00:06:55.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209392 s, 19.6 MB/s 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:55.242 12:19:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:55.242 12:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.242 12:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.242 12:19:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.242 12:19:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.242 12:19:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:55.512 { 00:06:55.512 "nbd_device": "/dev/nbd0", 00:06:55.512 "bdev_name": "Malloc0" 00:06:55.512 }, 00:06:55.512 { 00:06:55.512 "nbd_device": "/dev/nbd1", 00:06:55.512 "bdev_name": "Malloc1" 00:06:55.512 } 00:06:55.512 ]' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.512 { 00:06:55.512 "nbd_device": "/dev/nbd0", 00:06:55.512 "bdev_name": "Malloc0" 00:06:55.512 }, 00:06:55.512 { 00:06:55.512 "nbd_device": "/dev/nbd1", 00:06:55.512 "bdev_name": "Malloc1" 00:06:55.512 } 00:06:55.512 ]' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.512 /dev/nbd1' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.512 /dev/nbd1' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.512 256+0 records in 00:06:55.512 256+0 records out 00:06:55.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120036 s, 87.4 MB/s 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.512 256+0 records in 00:06:55.512 256+0 records out 00:06:55.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148726 s, 70.5 MB/s 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.512 256+0 records in 00:06:55.512 256+0 records out 00:06:55.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151795 s, 69.1 MB/s 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.512 12:19:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.787 12:19:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.058 12:19:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:56.318 12:19:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:56.318 12:19:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.578 12:19:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:56.578 [2024-07-25 12:19:29.908248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.578 [2024-07-25 12:19:29.969849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.578 [2024-07-25 12:19:29.969853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.839 [2024-07-25 12:19:30.000025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.839 [2024-07-25 12:19:30.000060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.136 12:19:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 224916 /var/tmp/spdk-nbd.sock 00:07:00.136 12:19:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 224916 ']' 00:07:00.136 12:19:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.136 12:19:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.136 12:19:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
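(Editor's note) The waitfornbd_exit calls earlier in this block poll /proc/partitions until the stopped nbd device disappears before the test moves on. A minimal standalone sketch of that pattern, assuming a retry limit and sleep interval that are not read from the script, could look like:

    #!/usr/bin/env bash
    # Hypothetical sketch of the waitfornbd_exit pattern from nbd_common.sh above;
    # the retry limit and sleep interval are assumptions, not taken from the script.
    wait_for_nbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # The device is gone once its name no longer appears in /proc/partitions.
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0
            fi
            sleep 0.1
        done
        echo "timed out waiting for $nbd_name to detach" >&2
        return 1
    }

    # Stop the exported disk over the nbd RPC socket, then wait for the kernel
    # to drop it, mirroring the nbd_stop_disk / waitfornbd_exit calls in the log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    wait_for_nbd_exit nbd0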
00:07:00.136 12:19:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.136 12:19:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:00.136 12:19:33 event.app_repeat -- event/event.sh@39 -- # killprocess 224916 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 224916 ']' 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 224916 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 224916 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 224916' 00:07:00.136 killing process with pid 224916 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@967 -- # kill 224916 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@972 -- # wait 224916 00:07:00.136 spdk_app_start is called in Round 0. 00:07:00.136 Shutdown signal received, stop current app iteration 00:07:00.136 Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 reinitialization... 00:07:00.136 spdk_app_start is called in Round 1. 00:07:00.136 Shutdown signal received, stop current app iteration 00:07:00.136 Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 reinitialization... 00:07:00.136 spdk_app_start is called in Round 2. 00:07:00.136 Shutdown signal received, stop current app iteration 00:07:00.136 Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 reinitialization... 00:07:00.136 spdk_app_start is called in Round 3. 
00:07:00.136 Shutdown signal received, stop current app iteration 00:07:00.136 12:19:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:00.136 12:19:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:00.136 00:07:00.136 real 0m16.467s 00:07:00.136 user 0m36.590s 00:07:00.136 sys 0m2.349s 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.136 12:19:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.136 ************************************ 00:07:00.136 END TEST app_repeat 00:07:00.136 ************************************ 00:07:00.136 12:19:33 event -- common/autotest_common.sh@1142 -- # return 0 00:07:00.136 12:19:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:00.136 12:19:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:00.136 12:19:33 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.136 12:19:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.136 12:19:33 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.136 ************************************ 00:07:00.136 START TEST cpu_locks 00:07:00.136 ************************************ 00:07:00.136 12:19:33 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:00.136 * Looking for test storage... 00:07:00.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:00.137 12:19:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:00.137 12:19:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:00.137 12:19:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:00.137 12:19:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:00.137 12:19:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.137 12:19:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.137 12:19:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.137 ************************************ 00:07:00.137 START TEST default_locks 00:07:00.137 ************************************ 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=227896 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 227896 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 227896 ']' 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
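(Editor's note) The "spdk_app_start is called in Round 0..3" and "Shutdown signal received" messages near the top of this block come from the app_repeat harness, which re-enters spdk_app_start after each shutdown; the shutdown itself is requested from the shell with the spdk_kill_instance RPC visible just before them. A hedged reduction of that trigger, where only the RPC call, socket path and the three-second settle time are taken from the log and the loop is purely illustrative, would be:

    # Hypothetical reduction of the app_repeat shutdown trigger.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    RPC_SOCK=/var/tmp/spdk-nbd.sock

    for round in 0 1 2 3; do
        # Ask the running app to terminate this iteration; app_repeat logs a new
        # "spdk_app_start is called in Round N" line each time it comes back up.
        "$RPC" -s "$RPC_SOCK" spdk_kill_instance SIGTERM
        sleep 3
    done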
00:07:00.137 12:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.137 12:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.137 [2024-07-25 12:19:33.447857] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:00.137 [2024-07-25 12:19:33.447927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227896 ] 00:07:00.137 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.137 [2024-07-25 12:19:33.533009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.396 [2024-07-25 12:19:33.601460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.966 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.966 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:00.966 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 227896 00:07:00.966 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 227896 00:07:00.966 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.537 lslocks: write error 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 227896 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 227896 ']' 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 227896 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 227896 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 227896' 00:07:01.537 killing process with pid 227896 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 227896 00:07:01.537 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 227896 00:07:01.797 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 227896 00:07:01.797 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:01.797 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 227896 00:07:01.797 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:01.797 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.797 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:01.797 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 227896 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 227896 ']' 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (227896) - No such process 00:07:01.798 ERROR: process (pid: 227896) is no longer running 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.798 00:07:01.798 real 0m1.601s 00:07:01.798 user 0m1.744s 00:07:01.798 sys 0m0.548s 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.798 12:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.798 ************************************ 00:07:01.798 END TEST default_locks 00:07:01.798 ************************************ 00:07:01.798 12:19:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:01.798 12:19:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:01.798 12:19:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.798 12:19:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.798 12:19:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.798 ************************************ 00:07:01.798 START TEST default_locks_via_rpc 00:07:01.798 ************************************ 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=228234 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 228234 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 228234 ']' 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.798 12:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.798 [2024-07-25 12:19:35.131691] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:01.798 [2024-07-25 12:19:35.131742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228234 ] 00:07:01.798 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.798 [2024-07-25 12:19:35.212471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.058 [2024-07-25 12:19:35.275537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 228234 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 228234 00:07:03.010 12:19:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.579 12:19:36 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 228234 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 228234 ']' 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 228234 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 228234 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 228234' 00:07:03.579 killing process with pid 228234 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 228234 00:07:03.579 12:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 228234 00:07:03.838 00:07:03.838 real 0m2.062s 00:07:03.838 user 0m2.561s 00:07:03.838 sys 0m0.617s 00:07:03.838 12:19:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.838 12:19:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.838 ************************************ 00:07:03.838 END TEST default_locks_via_rpc 00:07:03.838 ************************************ 00:07:03.838 12:19:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:03.838 12:19:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:03.838 12:19:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.838 12:19:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.838 12:19:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.838 ************************************ 00:07:03.838 START TEST non_locking_app_on_locked_coremask 00:07:03.838 ************************************ 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=228588 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 228588 /var/tmp/spdk.sock 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 228588 ']' 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.838 12:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.098 [2024-07-25 12:19:37.259711] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:04.098 [2024-07-25 12:19:37.259764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228588 ] 00:07:04.098 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.098 [2024-07-25 12:19:37.342297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.098 [2024-07-25 12:19:37.408003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=228871 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 228871 /var/tmp/spdk2.sock 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 228871 ']' 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.036 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.036 [2024-07-25 12:19:38.126697] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:05.036 [2024-07-25 12:19:38.126746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228871 ] 00:07:05.036 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.036 [2024-07-25 12:19:38.217842] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
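(Editor's note) The cpu_locks tests in this block verify core-lock ownership by checking whether the target process holds a file lock named spdk_cpu_lock, and they exercise the --disable-cpumask-locks flag seen just above. A minimal sketch of that check, where waitforlisten is replaced by a plain sleep as an assumption and everything else mirrors commands visible in the log, could be:

    # Hypothetical sketch of the core-lock check these tests perform.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    locks_exist() {
        local pid=$1
        # A running target holds one lock file per claimed core
        # (/var/tmp/spdk_cpu_lock_000, _001, ...), visible via lslocks.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # First instance claims core 0 (mask 0x1) and should hold the lock file.
    "$SPDK_TGT" -m 0x1 &
    pid1=$!
    sleep 1
    locks_exist "$pid1" && echo "core 0 lock held by pid $pid1"

    # With --disable-cpumask-locks a second instance may share the same core, as
    # long as it uses its own RPC socket; the log above shows it starting with
    # "CPU core locks deactivated."
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 1
    kill "$pid2" "$pid1"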
00:07:05.036 [2024-07-25 12:19:38.217865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.036 [2024-07-25 12:19:38.340486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.605 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.605 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:05.605 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 228588 00:07:05.605 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 228588 00:07:05.605 12:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.175 lslocks: write error 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 228588 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 228588 ']' 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 228588 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 228588 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 228588' 00:07:06.175 killing process with pid 228588 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 228588 00:07:06.175 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 228588 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 228871 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 228871 ']' 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 228871 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 228871 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 228871' 00:07:06.435 killing 
process with pid 228871 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 228871 00:07:06.435 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 228871 00:07:06.695 00:07:06.695 real 0m2.791s 00:07:06.695 user 0m3.134s 00:07:06.695 sys 0m0.797s 00:07:06.695 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.695 12:19:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.695 ************************************ 00:07:06.695 END TEST non_locking_app_on_locked_coremask 00:07:06.695 ************************************ 00:07:06.695 12:19:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:06.695 12:19:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:06.695 12:19:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.695 12:19:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.695 12:19:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.695 ************************************ 00:07:06.695 START TEST locking_app_on_unlocked_coremask 00:07:06.695 ************************************ 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=229215 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 229215 /var/tmp/spdk.sock 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 229215 ']' 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.695 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.955 [2024-07-25 12:19:40.136629] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:06.955 [2024-07-25 12:19:40.136702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229215 ] 00:07:06.955 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.955 [2024-07-25 12:19:40.218941] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:06.955 [2024-07-25 12:19:40.218979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.955 [2024-07-25 12:19:40.296685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=229219 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 229219 /var/tmp/spdk2.sock 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 229219 ']' 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.215 12:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.215 [2024-07-25 12:19:40.505767] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:07.215 [2024-07-25 12:19:40.505815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229219 ] 00:07:07.215 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.215 [2024-07-25 12:19:40.595539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.475 [2024-07-25 12:19:40.721908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.046 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.046 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:08.046 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 229219 00:07:08.046 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 229219 00:07:08.046 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.306 lslocks: write error 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 229215 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 229215 ']' 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 229215 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 229215 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 229215' 00:07:08.306 killing process with pid 229215 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 229215 00:07:08.306 12:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 229215 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 229219 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 229219 ']' 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 229219 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 229219 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 229219' 00:07:08.876 killing process with pid 229219 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 229219 00:07:08.876 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 229219 00:07:09.136 00:07:09.136 real 0m2.279s 00:07:09.136 user 0m2.579s 00:07:09.136 sys 0m0.770s 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.136 ************************************ 00:07:09.136 END TEST locking_app_on_unlocked_coremask 00:07:09.136 ************************************ 00:07:09.136 12:19:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.136 12:19:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:09.136 12:19:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.136 12:19:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.136 12:19:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.136 ************************************ 00:07:09.136 START TEST locking_app_on_locked_coremask 00:07:09.136 ************************************ 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=229561 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 229561 /var/tmp/spdk.sock 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 229561 ']' 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.136 12:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.136 [2024-07-25 12:19:42.477109] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:09.136 [2024-07-25 12:19:42.477156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229561 ] 00:07:09.136 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.397 [2024-07-25 12:19:42.557301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.397 [2024-07-25 12:19:42.621271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=229861 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 229861 /var/tmp/spdk2.sock 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 229861 /var/tmp/spdk2.sock 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 229861 /var/tmp/spdk2.sock 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 229861 ']' 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.967 12:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.967 [2024-07-25 12:19:43.351255] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:09.967 [2024-07-25 12:19:43.351305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229861 ] 00:07:09.967 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.227 [2024-07-25 12:19:43.447814] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 229561 has claimed it. 00:07:10.227 [2024-07-25 12:19:43.447853] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (229861) - No such process 00:07:10.798 ERROR: process (pid: 229861) is no longer running 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 229561 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 229561 00:07:10.798 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.059 lslocks: write error 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 229561 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 229561 ']' 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 229561 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 229561 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 229561' 00:07:11.059 killing process with pid 229561 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 229561 00:07:11.059 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 229561 00:07:11.319 00:07:11.319 real 0m2.115s 00:07:11.319 user 0m2.440s 00:07:11.319 sys 0m0.539s 00:07:11.319 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.319 12:19:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.319 ************************************ 00:07:11.319 END TEST locking_app_on_locked_coremask 00:07:11.319 ************************************ 00:07:11.319 12:19:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.319 12:19:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:11.319 12:19:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.319 12:19:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.319 12:19:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.319 ************************************ 00:07:11.319 START TEST locking_overlapped_coremask 00:07:11.319 ************************************ 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=229932 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 229932 /var/tmp/spdk.sock 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 229932 ']' 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.319 12:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.319 [2024-07-25 12:19:44.670774] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:11.319 [2024-07-25 12:19:44.670825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229932 ] 00:07:11.319 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.579 [2024-07-25 12:19:44.754134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.579 [2024-07-25 12:19:44.819556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.579 [2024-07-25 12:19:44.819706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.579 [2024-07-25 12:19:44.819790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=230209 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 230209 /var/tmp/spdk2.sock 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 230209 /var/tmp/spdk2.sock 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 230209 /var/tmp/spdk2.sock 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 230209 ']' 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.148 12:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.148 [2024-07-25 12:19:45.561633] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:12.148 [2024-07-25 12:19:45.561688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230209 ] 00:07:12.409 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.409 [2024-07-25 12:19:45.784333] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 229932 has claimed it. 00:07:12.409 [2024-07-25 12:19:45.784436] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:12.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (230209) - No such process 00:07:12.979 ERROR: process (pid: 230209) is no longer running 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 229932 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 229932 ']' 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 229932 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 229932 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 229932' 00:07:12.979 killing process with pid 229932 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 229932 00:07:12.979 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 229932 00:07:13.240 00:07:13.240 real 0m1.826s 00:07:13.240 user 0m5.170s 00:07:13.240 sys 0m0.438s 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.240 ************************************ 00:07:13.240 END TEST locking_overlapped_coremask 00:07:13.240 ************************************ 00:07:13.240 12:19:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.240 12:19:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:13.240 12:19:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.240 12:19:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.240 12:19:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.240 ************************************ 00:07:13.240 START TEST locking_overlapped_coremask_via_rpc 00:07:13.240 ************************************ 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=230271 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 230271 /var/tmp/spdk.sock 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 230271 ']' 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.240 12:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.240 [2024-07-25 12:19:46.568376] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:13.240 [2024-07-25 12:19:46.568424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230271 ] 00:07:13.240 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.240 [2024-07-25 12:19:46.649065] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.240 [2024-07-25 12:19:46.649098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.500 [2024-07-25 12:19:46.723854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.500 [2024-07-25 12:19:46.724002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.500 [2024-07-25 12:19:46.724004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=230551 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 230551 /var/tmp/spdk2.sock 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 230551 ']' 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.069 12:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.069 [2024-07-25 12:19:47.472327] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:14.069 [2024-07-25 12:19:47.472378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230551 ] 00:07:14.329 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.329 [2024-07-25 12:19:47.696248] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.329 [2024-07-25 12:19:47.696309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.589 [2024-07-25 12:19:47.982807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.589 [2024-07-25 12:19:47.982959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.589 [2024-07-25 12:19:47.982966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.160 [2024-07-25 12:19:48.445761] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 230271 has claimed it. 
00:07:15.160 request: 00:07:15.160 { 00:07:15.160 "method": "framework_enable_cpumask_locks", 00:07:15.160 "req_id": 1 00:07:15.160 } 00:07:15.160 Got JSON-RPC error response 00:07:15.160 response: 00:07:15.160 { 00:07:15.160 "code": -32603, 00:07:15.160 "message": "Failed to claim CPU core: 2" 00:07:15.160 } 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 230271 /var/tmp/spdk.sock 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 230271 ']' 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.160 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 230551 /var/tmp/spdk2.sock 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 230551 ']' 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.421 00:07:15.421 real 0m2.323s 00:07:15.421 user 0m0.980s 00:07:15.421 sys 0m0.161s 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.421 12:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.421 ************************************ 00:07:15.421 END TEST locking_overlapped_coremask_via_rpc 00:07:15.421 ************************************ 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:15.681 12:19:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:15.681 12:19:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 230271 ]] 00:07:15.681 12:19:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 230271 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 230271 ']' 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 230271 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 230271 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.681 12:19:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 230271' 00:07:15.681 killing process with pid 230271 00:07:15.682 12:19:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 230271 00:07:15.682 12:19:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 230271 00:07:15.942 12:19:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 230551 ]] 00:07:15.942 12:19:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 230551 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 230551 ']' 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 230551 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 230551 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 230551' 00:07:15.942 killing process with pid 230551 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 230551 00:07:15.942 12:19:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 230551 00:07:16.526 12:19:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.526 12:19:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:16.526 12:19:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 230271 ]] 00:07:16.526 12:19:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 230271 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 230271 ']' 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 230271 00:07:16.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (230271) - No such process 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 230271 is not found' 00:07:16.526 Process with pid 230271 is not found 00:07:16.526 12:19:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 230551 ]] 00:07:16.526 12:19:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 230551 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 230551 ']' 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 230551 00:07:16.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (230551) - No such process 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 230551 is not found' 00:07:16.526 Process with pid 230551 is not found 00:07:16.526 12:19:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.526 00:07:16.526 real 0m16.426s 00:07:16.526 user 0m29.321s 00:07:16.526 sys 0m4.886s 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.526 12:19:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.526 ************************************ 00:07:16.526 END TEST cpu_locks 00:07:16.526 ************************************ 00:07:16.526 12:19:49 event -- common/autotest_common.sh@1142 -- # return 0 00:07:16.526 00:07:16.526 real 0m42.931s 00:07:16.526 user 1m24.907s 00:07:16.526 sys 0m8.365s 00:07:16.526 12:19:49 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.526 12:19:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.526 ************************************ 00:07:16.526 END TEST event 00:07:16.526 ************************************ 00:07:16.526 12:19:49 -- common/autotest_common.sh@1142 -- # return 0 00:07:16.526 12:19:49 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:16.526 12:19:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.526 12:19:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.526 12:19:49 -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.526 ************************************ 00:07:16.526 START TEST thread 00:07:16.526 ************************************ 00:07:16.526 12:19:49 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:16.526 * Looking for test storage... 00:07:16.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:16.526 12:19:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:16.526 12:19:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:16.526 12:19:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.526 12:19:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.526 ************************************ 00:07:16.526 START TEST thread_poller_perf 00:07:16.526 ************************************ 00:07:16.526 12:19:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:16.844 [2024-07-25 12:19:49.953439] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:16.844 [2024-07-25 12:19:49.953566] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230977 ] 00:07:16.844 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.844 [2024-07-25 12:19:50.045732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.844 [2024-07-25 12:19:50.122152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.844 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:17.806 ====================================== 00:07:17.806 busy:2607857236 (cyc) 00:07:17.806 total_run_count: 311000 00:07:17.806 tsc_hz: 2600000000 (cyc) 00:07:17.806 ====================================== 00:07:17.806 poller_cost: 8385 (cyc), 3225 (nsec) 00:07:17.806 00:07:17.806 real 0m1.250s 00:07:17.806 user 0m1.152s 00:07:17.806 sys 0m0.093s 00:07:17.806 12:19:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.806 12:19:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.806 ************************************ 00:07:17.806 END TEST thread_poller_perf 00:07:17.806 ************************************ 00:07:17.806 12:19:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:17.807 12:19:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:17.807 12:19:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:17.807 12:19:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.807 12:19:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.066 ************************************ 00:07:18.066 START TEST thread_poller_perf 00:07:18.066 ************************************ 00:07:18.066 12:19:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:18.066 [2024-07-25 12:19:51.281294] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:18.066 [2024-07-25 12:19:51.281388] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231290 ] 00:07:18.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.066 [2024-07-25 12:19:51.367975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.066 [2024-07-25 12:19:51.432521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.066 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:19.449 ====================================== 00:07:19.449 busy:2602101334 (cyc) 00:07:19.449 total_run_count: 4119000 00:07:19.449 tsc_hz: 2600000000 (cyc) 00:07:19.449 ====================================== 00:07:19.449 poller_cost: 631 (cyc), 242 (nsec) 00:07:19.449 00:07:19.449 real 0m1.226s 00:07:19.449 user 0m1.139s 00:07:19.449 sys 0m0.083s 00:07:19.449 12:19:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.449 12:19:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:19.449 ************************************ 00:07:19.449 END TEST thread_poller_perf 00:07:19.449 ************************************ 00:07:19.449 12:19:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:19.449 12:19:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:19.449 00:07:19.449 real 0m2.727s 00:07:19.449 user 0m2.394s 00:07:19.449 sys 0m0.341s 00:07:19.449 12:19:52 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.449 12:19:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.449 ************************************ 00:07:19.449 END TEST thread 00:07:19.449 ************************************ 00:07:19.449 12:19:52 -- common/autotest_common.sh@1142 -- # return 0 00:07:19.449 12:19:52 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:19.449 12:19:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.449 12:19:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.449 12:19:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.449 ************************************ 00:07:19.449 START TEST accel 00:07:19.449 ************************************ 00:07:19.449 12:19:52 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:19.449 * Looking for test storage... 00:07:19.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:19.449 12:19:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:19.449 12:19:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:19.449 12:19:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:19.449 12:19:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=231645 00:07:19.449 12:19:52 accel -- accel/accel.sh@63 -- # waitforlisten 231645 00:07:19.449 12:19:52 accel -- common/autotest_common.sh@829 -- # '[' -z 231645 ']' 00:07:19.449 12:19:52 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.449 12:19:52 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.449 12:19:52 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.449 12:19:52 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.449 12:19:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.449 12:19:52 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:19.449 12:19:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:19.449 12:19:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.449 12:19:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.449 12:19:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.449 12:19:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.449 12:19:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.449 12:19:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:19.449 12:19:52 accel -- accel/accel.sh@41 -- # jq -r . 00:07:19.449 [2024-07-25 12:19:52.752973] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:19.449 [2024-07-25 12:19:52.753028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231645 ] 00:07:19.449 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.449 [2024-07-25 12:19:52.833738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.709 [2024-07-25 12:19:52.896810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.648 12:19:53 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@862 -- # return 0 00:07:20.649 12:19:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:20.649 12:19:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:20.649 12:19:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:20.649 12:19:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:20.649 12:19:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:20.649 12:19:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:20.649 12:19:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 
12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.649 12:19:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.649 12:19:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.649 12:19:53 accel -- accel/accel.sh@75 -- # killprocess 231645 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@948 -- # '[' -z 231645 ']' 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@952 -- # kill -0 231645 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@953 -- # uname 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.649 12:19:53 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 231645 00:07:20.649 12:19:54 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.649 12:19:54 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.649 12:19:54 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 231645' 00:07:20.649 killing process with pid 231645 00:07:20.649 12:19:54 accel -- common/autotest_common.sh@967 -- # kill 231645 00:07:20.649 12:19:54 accel -- common/autotest_common.sh@972 -- # wait 231645 00:07:20.909 12:19:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:20.909 12:19:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:20.909 12:19:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:20.909 12:19:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.909 12:19:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.909 12:19:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:20.909 12:19:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:20.909 12:19:54 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.909 12:19:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:21.171 12:19:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.171 12:19:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:21.171 12:19:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.171 12:19:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.171 12:19:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.171 ************************************ 00:07:21.171 START TEST accel_missing_filename 00:07:21.171 ************************************ 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.171 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:21.171 12:19:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:21.171 [2024-07-25 12:19:54.411579] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:21.171 [2024-07-25 12:19:54.411688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231986 ] 00:07:21.171 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.171 [2024-07-25 12:19:54.478396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.171 [2024-07-25 12:19:54.545172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.171 [2024-07-25 12:19:54.576067] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.432 [2024-07-25 12:19:54.612012] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:21.432 A filename is required. 
00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.432 00:07:21.432 real 0m0.283s 00:07:21.432 user 0m0.210s 00:07:21.432 sys 0m0.115s 00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.432 12:19:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:21.432 ************************************ 00:07:21.432 END TEST accel_missing_filename 00:07:21.432 ************************************ 00:07:21.432 12:19:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.432 12:19:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.432 12:19:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:21.432 12:19:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.432 12:19:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.432 ************************************ 00:07:21.432 START TEST accel_compress_verify 00:07:21.432 ************************************ 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.432 12:19:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.432 12:19:54 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:21.432 12:19:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:21.432 [2024-07-25 12:19:54.764474] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:21.432 [2024-07-25 12:19:54.764579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232015 ] 00:07:21.432 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.692 [2024-07-25 12:19:54.851665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.692 [2024-07-25 12:19:54.926570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.692 [2024-07-25 12:19:54.958669] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.692 [2024-07-25 12:19:54.995376] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:21.692 00:07:21.692 Compression does not support the verify option, aborting. 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.692 00:07:21.692 real 0m0.309s 00:07:21.692 user 0m0.249s 00:07:21.692 sys 0m0.130s 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.692 12:19:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:21.692 ************************************ 00:07:21.692 END TEST accel_compress_verify 00:07:21.692 ************************************ 00:07:21.692 12:19:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.692 12:19:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:21.692 12:19:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.692 12:19:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.692 12:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.953 ************************************ 00:07:21.953 START TEST accel_wrong_workload 00:07:21.953 ************************************ 00:07:21.953 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:21.953 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:21.953 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:21.953 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.953 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.953 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.953 12:19:55 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.953 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:21.953 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:21.953 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:21.954 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.954 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.954 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:21.954 12:19:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:21.954 Unsupported workload type: foobar 00:07:21.954 [2024-07-25 12:19:55.143891] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:21.954 accel_perf options: 00:07:21.954 [-h help message] 00:07:21.954 [-q queue depth per core] 00:07:21.954 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:21.954 [-T number of threads per core 00:07:21.954 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:21.954 [-t time in seconds] 00:07:21.954 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:21.954 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:21.954 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:21.954 [-l for compress/decompress workloads, name of uncompressed input file 00:07:21.954 [-S for crc32c workload, use this seed value (default 0) 00:07:21.954 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:21.954 [-f for fill workload, use this BYTE value (default 255) 00:07:21.954 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:21.954 [-y verify result if this switch is on] 00:07:21.954 [-a tasks to allocate per core (default: same value as -q)] 00:07:21.954 Can be used to spread operations across a wider range of memory. 
00:07:21.954 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:21.954 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.954 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.954 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.954 00:07:21.954 real 0m0.035s 00:07:21.954 user 0m0.024s 00:07:21.954 sys 0m0.011s 00:07:21.954 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.954 12:19:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:21.954 ************************************ 00:07:21.954 END TEST accel_wrong_workload 00:07:21.954 ************************************ 00:07:21.954 Error: writing output failed: Broken pipe 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.954 12:19:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.954 ************************************ 00:07:21.954 START TEST accel_negative_buffers 00:07:21.954 ************************************ 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:21.954 12:19:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:21.954 -x option must be non-negative. 
00:07:21.954 [2024-07-25 12:19:55.255308] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:21.954 accel_perf options: 00:07:21.954 [-h help message] 00:07:21.954 [-q queue depth per core] 00:07:21.954 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:21.954 [-T number of threads per core 00:07:21.954 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:21.954 [-t time in seconds] 00:07:21.954 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:21.954 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:21.954 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:21.954 [-l for compress/decompress workloads, name of uncompressed input file 00:07:21.954 [-S for crc32c workload, use this seed value (default 0) 00:07:21.954 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:21.954 [-f for fill workload, use this BYTE value (default 255) 00:07:21.954 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:21.954 [-y verify result if this switch is on] 00:07:21.954 [-a tasks to allocate per core (default: same value as -q)] 00:07:21.954 Can be used to spread operations across a wider range of memory. 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.954 00:07:21.954 real 0m0.035s 00:07:21.954 user 0m0.021s 00:07:21.954 sys 0m0.014s 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.954 12:19:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:21.954 ************************************ 00:07:21.954 END TEST accel_negative_buffers 00:07:21.954 ************************************ 00:07:21.954 Error: writing output failed: Broken pipe 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.954 12:19:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.954 12:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.954 ************************************ 00:07:21.954 START TEST accel_crc32c 00:07:21.954 ************************************ 00:07:21.954 12:19:55 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:21.954 12:19:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:21.954 [2024-07-25 12:19:55.352287] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:21.955 [2024-07-25 12:19:55.352380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232084 ] 00:07:22.216 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.216 [2024-07-25 12:19:55.436298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.216 [2024-07-25 12:19:55.500711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.216 12:19:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:23.597 12:19:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.597 00:07:23.597 real 0m1.298s 00:07:23.597 user 0m1.180s 00:07:23.597 sys 0m0.122s 00:07:23.597 12:19:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.597 12:19:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:23.597 ************************************ 00:07:23.597 END TEST accel_crc32c 00:07:23.597 ************************************ 00:07:23.597 12:19:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.597 12:19:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:23.597 12:19:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:23.597 12:19:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.597 12:19:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.597 ************************************ 00:07:23.597 START TEST accel_crc32c_C2 00:07:23.597 ************************************ 00:07:23.597 12:19:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:23.597 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.597 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:23.597 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.597 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.597 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:23.598 12:19:56 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:23.598 [2024-07-25 12:19:56.723415] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:23.598 [2024-07-25 12:19:56.723523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232401 ] 00:07:23.598 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.598 [2024-07-25 12:19:56.816160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.598 [2024-07-25 12:19:56.882486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:23.598 12:19:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.978 00:07:24.978 real 0m1.310s 00:07:24.978 user 0m1.193s 00:07:24.978 sys 0m0.121s 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.978 12:19:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:24.978 ************************************ 00:07:24.978 END TEST accel_crc32c_C2 00:07:24.978 ************************************ 00:07:24.978 12:19:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.978 12:19:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:24.978 12:19:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:24.978 12:19:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.978 12:19:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.978 ************************************ 00:07:24.978 START TEST accel_copy 00:07:24.978 ************************************ 00:07:24.978 12:19:58 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
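The accel_crc32c_C2 case that just finished, like every run in this log, passes its accel configuration as -c /dev/fd/62, i.e. a JSON document handed to accel_perf over an inherited file descriptor rather than a file on disk. A minimal, self-contained illustration of that plumbing is below; the consumer (cat) and the JSON payload are hypothetical stand-ins, not what build_accel_config actually emits (its output is not visible in this excerpt).

  # bash: expose a generated document on fd 62 via process substitution,
  # then read it back through /dev/fd/62 - the same pattern as '-c /dev/fd/62' above
  show_fd() { cat /dev/fd/62; }
  show_fd 62< <(printf '{"note":"hypothetical accel config"}\n')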
00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.978 12:19:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:24.979 [2024-07-25 12:19:58.104036] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:24.979 [2024-07-25 12:19:58.104098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232716 ] 00:07:24.979 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.979 [2024-07-25 12:19:58.190202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.979 [2024-07-25 12:19:58.258533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.979 12:19:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.358 12:19:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 
12:19:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:26.359 12:19:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.359 00:07:26.359 real 0m1.303s 00:07:26.359 user 0m1.183s 00:07:26.359 sys 0m0.123s 00:07:26.359 12:19:59 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.359 12:19:59 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.359 ************************************ 00:07:26.359 END TEST accel_copy 00:07:26.359 ************************************ 00:07:26.359 12:19:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.359 12:19:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.359 12:19:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:26.359 12:19:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.359 12:19:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.359 ************************************ 00:07:26.359 START TEST accel_fill 00:07:26.359 ************************************ 00:07:26.359 12:19:59 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:26.359 [2024-07-25 12:19:59.481099] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:26.359 [2024-07-25 12:19:59.481169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232775 ] 00:07:26.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.359 [2024-07-25 12:19:59.567029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.359 [2024-07-25 12:19:59.637464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
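Per the trace above, the fill case being set up here is launched as accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y; the config dump shows the fill byte as 0x80 (decimal 128) and 64 for both queue depth and tasks per core, matching the -f/-q/-a descriptions in the usage text. A hand-run sketch under the same assumptions as earlier (built examples, harness-supplied config omitted):

  # fill buffers with byte 0x80 for 1 second, 64 ops in flight, 64 tasks per core, verify results
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y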
00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.359 12:19:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.740 12:20:00 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:27.740 12:20:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.740 00:07:27.740 real 0m1.307s 00:07:27.740 user 0m1.183s 00:07:27.740 sys 0m0.127s 00:07:27.740 12:20:00 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.740 12:20:00 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:27.740 ************************************ 00:07:27.740 END TEST accel_fill 00:07:27.740 ************************************ 00:07:27.740 12:20:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.740 12:20:00 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:27.740 12:20:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:27.740 12:20:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.740 12:20:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.740 ************************************ 00:07:27.740 START TEST accel_copy_crc32c 00:07:27.740 ************************************ 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.740 12:20:00 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.741 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.741 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.741 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.741 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:27.741 12:20:00 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:27.741 [2024-07-25 12:20:00.863016] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:27.741 [2024-07-25 12:20:00.863078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233076 ] 00:07:27.741 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.741 [2024-07-25 12:20:00.947076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.741 [2024-07-25 12:20:01.011721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.741 
12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.741 12:20:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.119 00:07:29.119 real 0m1.301s 00:07:29.119 user 0m1.190s 00:07:29.119 sys 0m0.124s 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.119 12:20:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:29.119 ************************************ 00:07:29.119 END TEST accel_copy_crc32c 00:07:29.119 ************************************ 00:07:29.119 12:20:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.119 12:20:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.119 12:20:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:29.119 12:20:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.119 12:20:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.119 ************************************ 00:07:29.119 START TEST accel_copy_crc32c_C2 00:07:29.119 ************************************ 00:07:29.119 12:20:02 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.119 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.119 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:29.119 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:29.120 [2024-07-25 12:20:02.240618] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:29.120 [2024-07-25 12:20:02.240681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233393 ] 00:07:29.120 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.120 [2024-07-25 12:20:02.324325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.120 [2024-07-25 12:20:02.390363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
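The copy_crc32c workload that just completed and the -C 2 variant starting here differ only in the io vector size (-C, per the usage text). A sketch of both invocations, with the same caveats as above (harness-supplied JSON config omitted):

  # copy + crc32c in one operation, verifying results
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y
  # same workload with an io vector of 2 buffers
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2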
00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.120 12:20:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
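Each test above ends by checking that the software module handled the opcode ([[ -n software ]] / [[ software == software ]]). The usage text also documents -M for assigning a module to an operation explicitly (noted there as not compatible with the accel_assign_opc RPC). A hedged sketch follows; whether "software" is the exact module name -M accepts is an assumption, not something this log confirms.

  # pin the operation to the software module instead of relying on the default selection
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -M software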
00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.500 00:07:30.500 real 0m1.302s 00:07:30.500 user 0m1.192s 00:07:30.500 sys 0m0.122s 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.500 12:20:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:30.500 ************************************ 00:07:30.500 END TEST accel_copy_crc32c_C2 00:07:30.500 ************************************ 00:07:30.500 12:20:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.500 12:20:03 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:30.500 12:20:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:30.500 12:20:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.500 12:20:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.500 ************************************ 00:07:30.500 START TEST accel_dualcast 00:07:30.500 ************************************ 00:07:30.500 12:20:03 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:30.500 12:20:03 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:30.501 [2024-07-25 12:20:03.622501] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
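Each accel subtest above follows the same harness pattern: run_test wraps accel_test, which launches the accel_perf example with the workload selected by -w and a JSON accel configuration handed over on /dev/fd/62. In these runs the configuration is effectively empty, since the trace shows accel_json_cfg=() and every [[ 0 -gt 0 ]] module check evaluating false. A minimal sketch of reproducing the dualcast run by hand, assuming a built SPDK tree and using only options visible in this log:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second dualcast workload, verifying the copied data (-y); the -c JSON config is
  # omitted here because the harness supplied an effectively empty one anyway
  ./build/examples/accel_perf -t 1 -w dualcast -y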
00:07:30.501 [2024-07-25 12:20:03.622583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233674 ] 00:07:30.501 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.501 [2024-07-25 12:20:03.709385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.501 [2024-07-25 12:20:03.784851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:30.501 12:20:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:04 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:31.881 12:20:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.881 00:07:31.881 real 0m1.317s 00:07:31.881 user 0m1.205s 00:07:31.881 sys 0m0.122s 00:07:31.881 12:20:04 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.881 12:20:04 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:31.881 ************************************ 00:07:31.881 END TEST accel_dualcast 00:07:31.881 ************************************ 00:07:31.881 12:20:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.881 12:20:04 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:31.881 12:20:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:31.881 12:20:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.881 12:20:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.881 ************************************ 00:07:31.881 START TEST accel_compare 00:07:31.881 ************************************ 00:07:31.881 12:20:04 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:31.881 12:20:04 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:31.881 [2024-07-25 12:20:05.017969] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
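The long runs of IFS=:, read -r var val and case "$var" in the trace are accel.sh reading accel_perf's own configuration banner line by line ("key: value" pairs such as the core mask 0x1, the workload type, the '4096 bytes' transfer size, queue depth 32, the '1 seconds' run time and the software module) and capturing the opcode and module actually used into the local accel_opc and accel_module variables. A rough sketch of that shell idiom, with a hypothetical input file standing in for the accel_perf output and assumed pattern names:

  # parse "key: value" lines and remember the fields the test later asserts on
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val##* } ;;   # pattern names assumed
          *"Module"*)        accel_module=${val##* } ;;
      esac
  done < banner.txt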
00:07:31.881 [2024-07-25 12:20:05.018064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233775 ] 00:07:31.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.881 [2024-07-25 12:20:05.104356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.881 [2024-07-25 12:20:05.181369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.881 12:20:05 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.882 12:20:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 
12:20:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:33.291 12:20:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.291 00:07:33.291 real 0m1.318s 00:07:33.291 user 0m1.203s 00:07:33.291 sys 0m0.125s 00:07:33.291 12:20:06 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.291 12:20:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:33.291 ************************************ 00:07:33.291 END TEST accel_compare 00:07:33.291 ************************************ 00:07:33.291 12:20:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.291 12:20:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:33.291 12:20:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:33.291 12:20:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.291 12:20:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.291 ************************************ 00:07:33.291 START TEST accel_xor 00:07:33.291 ************************************ 00:07:33.291 12:20:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:33.291 [2024-07-25 12:20:06.417663] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
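Each block ends the same way: the three [[ ... ]] tests check that a module and an opcode were captured and that the module is software, and the real/user/sys triple is the shell's timing for the subtest (roughly 1.3 s of wall clock for a 1-second workload here). Restated as a sketch using the same local variable names the trace declares:

  # per-test pass condition as seen in the trace
  [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]

The run that starts here exercises the xor opcode; its trace shows the default of two source buffers (val=2).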
00:07:33.291 [2024-07-25 12:20:06.417781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234068 ] 00:07:33.291 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.291 [2024-07-25 12:20:06.553696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.291 [2024-07-25 12:20:06.629070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.291 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:33.292 12:20:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.673 00:07:34.673 real 0m1.371s 00:07:34.673 user 0m1.221s 00:07:34.673 sys 0m0.160s 00:07:34.673 12:20:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.673 12:20:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:34.673 ************************************ 00:07:34.673 END TEST accel_xor 00:07:34.673 ************************************ 00:07:34.673 12:20:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.673 12:20:07 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:34.673 12:20:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:34.673 12:20:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.673 12:20:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.673 ************************************ 00:07:34.673 START TEST accel_xor 00:07:34.673 ************************************ 00:07:34.673 12:20:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:34.673 12:20:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:34.673 [2024-07-25 12:20:07.859261] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
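The second accel_xor run repeats the workload with three source buffers instead of the default two, which is what the -x 3 argument and the val=3 in the trace correspond to. Reproduced standalone, again assuming a built tree and only the options shown in this log:

  # XOR three source buffers into the destination for 1 second, verifying the result (-y)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3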
00:07:34.673 [2024-07-25 12:20:07.859324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234389 ] 00:07:34.673 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.673 [2024-07-25 12:20:07.944127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.673 [2024-07-25 12:20:08.012284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.673 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.673 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.674 12:20:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:36.056 12:20:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.056 00:07:36.056 real 0m1.305s 00:07:36.056 user 0m1.197s 00:07:36.056 sys 0m0.119s 00:07:36.056 12:20:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.056 12:20:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:36.056 ************************************ 00:07:36.056 END TEST accel_xor 00:07:36.056 ************************************ 00:07:36.056 12:20:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.056 12:20:09 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:36.056 12:20:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:36.056 12:20:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.056 12:20:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.056 ************************************ 00:07:36.056 START TEST accel_dif_verify 00:07:36.056 ************************************ 00:07:36.056 12:20:09 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:36.056 [2024-07-25 12:20:09.244380] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
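The dif_verify and dif_generate subtests that follow exercise the DIF (Data Integrity Field) opcodes. Alongside the usual '4096 bytes' transfer, their traces carry extra '512 bytes' and '8 bytes' values, which appear to be the protected block granularity and per-block metadata size reported by accel_perf; the exact option names are not visible in this log. The invocation itself stays minimal (note that -y is not passed for these runs):

  # 1-second DIF verify workload on the software module, block/metadata sizes at harness defaults
  ./build/examples/accel_perf -t 1 -w dif_verify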
00:07:36.056 [2024-07-25 12:20:09.244448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234706 ] 00:07:36.056 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.056 [2024-07-25 12:20:09.328807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.056 [2024-07-25 12:20:09.395070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.056 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.057 12:20:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:37.439 12:20:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.439 00:07:37.439 real 0m1.306s 00:07:37.439 user 0m1.200s 00:07:37.439 sys 0m0.119s 00:07:37.439 12:20:10 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.439 12:20:10 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:37.439 ************************************ 00:07:37.439 END TEST accel_dif_verify 00:07:37.439 ************************************ 00:07:37.439 12:20:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.439 12:20:10 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:37.439 12:20:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:37.439 12:20:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.439 12:20:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.439 ************************************ 00:07:37.439 START TEST accel_dif_generate 00:07:37.439 ************************************ 00:07:37.439 12:20:10 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 
12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:37.439 [2024-07-25 12:20:10.627792] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:37.439 [2024-07-25 12:20:10.627871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234852 ] 00:07:37.439 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.439 [2024-07-25 12:20:10.714363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.439 [2024-07-25 12:20:10.792573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:37.439 12:20:10 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.439 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 12:20:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.823 12:20:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:38.823 12:20:11 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.823 00:07:38.823 real 0m1.320s 00:07:38.823 user 0m1.203s 00:07:38.823 sys 0m0.130s 00:07:38.823 12:20:11 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.823 12:20:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:38.823 ************************************ 00:07:38.823 END TEST accel_dif_generate 00:07:38.823 ************************************ 00:07:38.823 12:20:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.823 12:20:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:38.823 12:20:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:38.823 12:20:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.823 12:20:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.823 ************************************ 00:07:38.823 START TEST accel_dif_generate_copy 00:07:38.823 ************************************ 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:38.823 12:20:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:38.823 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.823 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.823 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.823 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.823 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.823 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:38.823 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:38.823 [2024-07-25 12:20:12.029790] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:38.823 [2024-07-25 12:20:12.029910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235071 ] 00:07:38.823 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.823 [2024-07-25 12:20:12.166972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.823 [2024-07-25 12:20:12.241778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.084 12:20:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.064 00:07:40.064 real 0m1.372s 00:07:40.064 user 0m1.215s 00:07:40.064 sys 0m0.168s 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.064 12:20:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 ************************************ 00:07:40.064 END TEST accel_dif_generate_copy 00:07:40.064 ************************************ 00:07:40.064 12:20:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.064 12:20:13 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:40.064 12:20:13 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.064 12:20:13 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:40.064 12:20:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.064 12:20:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 ************************************ 00:07:40.064 START TEST accel_comp 00:07:40.064 ************************************ 00:07:40.064 12:20:13 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.064 12:20:13 accel.accel_comp -- 
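The accel_comp test that begins here adds an input file: the run_test line above passes -l test/accel/bib, so accel_perf compresses that sample repeatedly in software for the 1-second window. A sketch with the paths taken from this trace follows (again assuming the default software module stands in for the -c /dev/fd/62 config the harness provides):

  # compress test/accel/bib in software for ~1 second (sketch)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib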
accel/accel.sh@16 -- # local accel_opc 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:40.064 12:20:13 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:40.377 [2024-07-25 12:20:13.479358] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:40.377 [2024-07-25 12:20:13.479476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235381 ] 00:07:40.377 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.377 [2024-07-25 12:20:13.612741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.377 [2024-07-25 12:20:13.690126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.378 12:20:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:41.761 12:20:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.761 00:07:41.761 real 0m1.374s 00:07:41.761 user 0m1.226s 00:07:41.761 sys 0m0.159s 00:07:41.761 12:20:14 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.761 12:20:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:41.761 ************************************ 00:07:41.761 END TEST accel_comp 00:07:41.761 ************************************ 00:07:41.761 12:20:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.761 12:20:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.761 12:20:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:41.761 12:20:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.761 12:20:14 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.761 ************************************ 00:07:41.761 START TEST accel_decomp 00:07:41.761 ************************************ 00:07:41.761 12:20:14 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.761 12:20:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:41.761 [2024-07-25 12:20:14.926008] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
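For the decompress tests the harness reuses the same bib file and appends -y to the accel_perf command line shown above; reading -y as "verify the result of each operation" is an assumption based on accel_perf's option set, not something stated in this log. Sketch:

  # decompress the pre-compressed bib input; -y assumed to enable result verification
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y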
00:07:41.761 [2024-07-25 12:20:14.926106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235711 ] 00:07:41.761 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.761 [2024-07-25 12:20:15.016968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.761 [2024-07-25 12:20:15.085302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.761 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.762 12:20:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.143 12:20:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.143 00:07:43.143 real 0m1.316s 00:07:43.143 user 0m1.206s 00:07:43.143 sys 0m0.122s 00:07:43.143 12:20:16 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.143 12:20:16 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:43.143 ************************************ 00:07:43.143 END TEST accel_decomp 00:07:43.143 ************************************ 00:07:43.143 12:20:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.143 12:20:16 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.143 12:20:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:43.143 12:20:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.143 12:20:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.143 ************************************ 00:07:43.143 START TEST accel_decomp_full 00:07:43.143 ************************************ 00:07:43.143 12:20:16 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.143 12:20:16 
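accel_decomp_full repeats the decompress run with -o 0 appended; the only visible change in the trace that follows is that the val loop reads a single '111250 bytes' data size instead of the usual '4096 bytes'. Treating -o as the transfer size, with 0 meaning "use the whole input in one operation", is an assumption drawn from that change rather than from the log text. Sketch:

  # decompress the bib input as one full-size transfer (sketch; -o semantics assumed)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0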
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:43.143 12:20:16 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:43.143 [2024-07-25 12:20:16.318782] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:43.143 [2024-07-25 12:20:16.318843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235969 ] 00:07:43.143 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.143 [2024-07-25 12:20:16.403498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.144 [2024-07-25 12:20:16.472591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.144 12:20:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.527 12:20:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.527 00:07:44.527 real 0m1.320s 00:07:44.527 user 0m1.213s 00:07:44.527 sys 0m0.120s 00:07:44.527 12:20:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.527 12:20:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:44.527 ************************************ 00:07:44.527 END TEST accel_decomp_full 00:07:44.527 ************************************ 00:07:44.527 12:20:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.527 12:20:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.527 12:20:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:44.527 12:20:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.527 12:20:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.527 ************************************ 00:07:44.527 START TEST accel_decomp_mcore 00:07:44.527 ************************************ 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:44.527 [2024-07-25 12:20:17.715222] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:44.527 [2024-07-25 12:20:17.715280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236083 ] 00:07:44.527 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.527 [2024-07-25 12:20:17.799737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.527 [2024-07-25 12:20:17.872623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.527 [2024-07-25 12:20:17.872820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.527 [2024-07-25 12:20:17.872964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.527 [2024-07-25 12:20:17.872965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.527 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 
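accel_decomp_mcore is the same decompress workload fanned out across cores: the harness passes -m 0xf to accel_perf, the EAL is started with -c 0xf, and four reactors come up on cores 0-3 just above before the val loop resumes. A sketch of the multi-core invocation, with the same caveat that the harness-supplied /dev/fd/62 config is omitted here:

  # run the decompress workload across a 4-core mask (cores 0-3)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf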
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:44.528 12:20:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.910 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.911 00:07:45.911 real 0m1.328s 00:07:45.911 user 0m4.463s 00:07:45.911 sys 0m0.125s 00:07:45.911 12:20:19 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.911 12:20:19 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:45.911 ************************************ 00:07:45.911 END TEST accel_decomp_mcore 00:07:45.911 ************************************ 00:07:45.911 12:20:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.911 12:20:19 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.911 12:20:19 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:45.911 12:20:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.911 12:20:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.911 ************************************ 00:07:45.911 START TEST accel_decomp_full_mcore 00:07:45.911 ************************************ 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:45.911 [2024-07-25 12:20:19.120541] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:45.911 [2024-07-25 12:20:19.120627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236388 ] 00:07:45.911 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.911 [2024-07-25 12:20:19.204927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.911 [2024-07-25 12:20:19.280804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.911 [2024-07-25 12:20:19.280955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.911 [2024-07-25 12:20:19.281066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.911 [2024-07-25 12:20:19.281067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.911 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.171 12:20:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.111 00:07:47.111 real 0m1.375s 00:07:47.111 user 0m4.625s 00:07:47.111 sys 0m0.138s 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.111 12:20:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:47.111 ************************************ 00:07:47.111 END TEST accel_decomp_full_mcore 00:07:47.111 ************************************ 00:07:47.111 12:20:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.111 12:20:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.111 12:20:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:47.111 12:20:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.111 12:20:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.372 ************************************ 00:07:47.372 START TEST accel_decomp_mthread 00:07:47.372 ************************************ 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:47.372 [2024-07-25 12:20:20.574422] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:47.372 [2024-07-25 12:20:20.574487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236713 ] 00:07:47.372 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.372 [2024-07-25 12:20:20.659519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.372 [2024-07-25 12:20:20.733909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.372 12:20:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.753 12:20:21 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:48.753 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.753 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.753 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.753 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.753 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.753 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.754 00:07:48.754 real 0m1.319s 00:07:48.754 user 0m1.207s 00:07:48.754 sys 0m0.124s 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.754 12:20:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:48.754 ************************************ 00:07:48.754 END TEST accel_decomp_mthread 00:07:48.754 ************************************ 00:07:48.754 12:20:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.754 12:20:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.754 12:20:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:48.754 12:20:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.754 12:20:21 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.754 ************************************ 00:07:48.754 START TEST accel_decomp_full_mthread 00:07:48.754 ************************************ 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:48.754 12:20:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:48.754 [2024-07-25 12:20:21.972271] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:48.754 [2024-07-25 12:20:21.972367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237035 ] 00:07:48.754 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.754 [2024-07-25 12:20:22.069762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.754 [2024-07-25 12:20:22.143197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.014 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.015 12:20:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.953 00:07:49.953 real 0m1.356s 00:07:49.953 user 0m1.239s 00:07:49.953 sys 0m0.130s 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.953 12:20:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:49.953 ************************************ 00:07:49.953 END TEST accel_decomp_full_mthread 
00:07:49.953 ************************************ 00:07:49.954 12:20:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.954 12:20:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:49.954 12:20:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:49.954 12:20:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.954 12:20:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:49.954 12:20:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.954 12:20:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.954 12:20:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.954 12:20:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.954 12:20:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.954 12:20:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.954 12:20:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.954 12:20:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:49.954 12:20:23 accel -- accel/accel.sh@41 -- # jq -r . 00:07:50.214 ************************************ 00:07:50.214 START TEST accel_dif_functional_tests 00:07:50.214 ************************************ 00:07:50.214 12:20:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:50.214 [2024-07-25 12:20:23.433655] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:50.214 [2024-07-25 12:20:23.433720] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237204 ] 00:07:50.214 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.214 [2024-07-25 12:20:23.519374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.214 [2024-07-25 12:20:23.599496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.214 [2024-07-25 12:20:23.599622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.214 [2024-07-25 12:20:23.599797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.474 00:07:50.474 00:07:50.474 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.474 http://cunit.sourceforge.net/ 00:07:50.474 00:07:50.474 00:07:50.474 Suite: accel_dif 00:07:50.474 Test: verify: DIF generated, GUARD check ...passed 00:07:50.474 Test: verify: DIF generated, APPTAG check ...passed 00:07:50.474 Test: verify: DIF generated, REFTAG check ...passed 00:07:50.474 Test: verify: DIF not generated, GUARD check ...[2024-07-25 12:20:23.655903] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.474 passed 00:07:50.474 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 12:20:23.655955] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.474 passed 00:07:50.474 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 12:20:23.655986] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.474 passed 00:07:50.474 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:50.474 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 12:20:23.656043] dif.c: 
876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:50.474 passed 00:07:50.474 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:50.474 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:50.474 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:50.474 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 12:20:23.656170] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:50.474 passed 00:07:50.474 Test: verify copy: DIF generated, GUARD check ...passed 00:07:50.474 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:50.474 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:50.474 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 12:20:23.656314] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.474 passed 00:07:50.474 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 12:20:23.656342] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.474 passed 00:07:50.474 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 12:20:23.656368] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.474 passed 00:07:50.474 Test: generate copy: DIF generated, GUARD check ...passed 00:07:50.474 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:50.474 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:50.474 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:50.474 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:50.474 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:50.474 Test: generate copy: iovecs-len validate ...[2024-07-25 12:20:23.656596] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:50.474 passed 00:07:50.474 Test: generate copy: buffer alignment validate ...passed 00:07:50.474 00:07:50.474 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.474 suites 1 1 n/a 0 0 00:07:50.474 tests 26 26 26 0 0 00:07:50.474 asserts 115 115 115 0 n/a 00:07:50.474 00:07:50.474 Elapsed time = 0.002 seconds 00:07:50.474 00:07:50.474 real 0m0.391s 00:07:50.474 user 0m0.500s 00:07:50.474 sys 0m0.154s 00:07:50.474 12:20:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.474 12:20:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 ************************************ 00:07:50.474 END TEST accel_dif_functional_tests 00:07:50.474 ************************************ 00:07:50.474 12:20:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.474 00:07:50.474 real 0m31.205s 00:07:50.474 user 0m34.666s 00:07:50.474 sys 0m4.798s 00:07:50.474 12:20:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.474 12:20:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 ************************************ 00:07:50.474 END TEST accel 00:07:50.474 ************************************ 00:07:50.474 12:20:23 -- common/autotest_common.sh@1142 -- # return 0 00:07:50.474 12:20:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:50.474 12:20:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.474 12:20:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.474 12:20:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.474 ************************************ 00:07:50.474 START TEST accel_rpc 00:07:50.474 ************************************ 00:07:50.474 12:20:23 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:50.734 * Looking for test storage... 00:07:50.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:50.734 12:20:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.734 12:20:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=237425 00:07:50.734 12:20:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 237425 00:07:50.734 12:20:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:50.734 12:20:23 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 237425 ']' 00:07:50.734 12:20:23 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.734 12:20:23 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.734 12:20:23 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.734 12:20:23 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.734 12:20:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.734 [2024-07-25 12:20:24.045493] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:07:50.734 [2024-07-25 12:20:24.045557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237425 ] 00:07:50.734 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.734 [2024-07-25 12:20:24.128664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.994 [2024-07-25 12:20:24.193256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.564 12:20:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.564 12:20:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:51.564 12:20:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:51.564 12:20:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:51.564 12:20:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:51.564 12:20:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:51.564 12:20:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:51.564 12:20:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.564 12:20:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.564 12:20:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.564 ************************************ 00:07:51.564 START TEST accel_assign_opcode 00:07:51.564 ************************************ 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.564 [2024-07-25 12:20:24.903271] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.564 [2024-07-25 12:20:24.911286] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.564 12:20:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.824 software 00:07:51.824 00:07:51.824 real 0m0.199s 00:07:51.824 user 0m0.046s 00:07:51.824 sys 0m0.013s 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.824 12:20:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.824 ************************************ 00:07:51.824 END TEST accel_assign_opcode 00:07:51.824 ************************************ 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:51.824 12:20:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 237425 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 237425 ']' 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 237425 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 237425 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 237425' 00:07:51.824 killing process with pid 237425 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 237425 00:07:51.824 12:20:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 237425 00:07:52.084 00:07:52.084 real 0m1.507s 00:07:52.084 user 0m1.638s 00:07:52.084 sys 0m0.418s 00:07:52.084 12:20:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.084 12:20:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.084 ************************************ 00:07:52.084 END TEST accel_rpc 00:07:52.084 ************************************ 00:07:52.084 12:20:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:52.084 12:20:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:52.084 12:20:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.084 12:20:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.084 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.084 ************************************ 00:07:52.084 START TEST app_cmdline 00:07:52.084 ************************************ 00:07:52.084 12:20:25 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:52.344 * Looking for test storage... 
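For reference, the opcode-assignment flow exercised by the accel_rpc test above can be replayed by hand against a target started with --wait-for-rpc; a minimal sketch using the RPC names seen in this run (the default /var/tmp/spdk.sock socket is assumed):

  # route the copy opcode to the software module before the framework initializes
  scripts/rpc.py accel_assign_opc -o copy -m software
  # complete initialization so the assignment is applied
  scripts/rpc.py framework_start_init
  # verify: the copy opcode should now report the software module
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy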
00:07:52.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:52.344 12:20:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:52.345 12:20:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=237781 00:07:52.345 12:20:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 237781 00:07:52.345 12:20:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 237781 ']' 00:07:52.345 12:20:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:52.345 12:20:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.345 12:20:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.345 12:20:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.345 12:20:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.345 12:20:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.345 [2024-07-25 12:20:25.632290] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:07:52.345 [2024-07-25 12:20:25.632363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237781 ] 00:07:52.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.345 [2024-07-25 12:20:25.716300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.605 [2024-07-25 12:20:25.784612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.176 12:20:26 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.176 12:20:26 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:53.176 12:20:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:53.436 { 00:07:53.436 "version": "SPDK v24.09-pre git sha1 8fdaab4b1", 00:07:53.436 "fields": { 00:07:53.436 "major": 24, 00:07:53.436 "minor": 9, 00:07:53.436 "patch": 0, 00:07:53.436 "suffix": "-pre", 00:07:53.436 "commit": "8fdaab4b1" 00:07:53.436 } 00:07:53.436 } 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:53.436 12:20:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:53.436 12:20:26 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.697 request: 00:07:53.697 { 00:07:53.697 "method": "env_dpdk_get_mem_stats", 00:07:53.697 "req_id": 1 00:07:53.697 } 00:07:53.697 Got JSON-RPC error response 00:07:53.697 response: 00:07:53.697 { 00:07:53.697 "code": -32601, 00:07:53.697 "message": "Method not found" 00:07:53.697 } 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.697 12:20:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 237781 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 237781 ']' 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 237781 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 237781 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 237781' 00:07:53.697 killing process with pid 237781 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@967 -- # kill 237781 00:07:53.697 12:20:26 app_cmdline -- common/autotest_common.sh@972 -- # wait 237781 00:07:53.957 00:07:53.957 real 0m1.667s 00:07:53.957 user 0m2.068s 00:07:53.957 sys 0m0.430s 00:07:53.957 12:20:27 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.957 
12:20:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.957 ************************************ 00:07:53.957 END TEST app_cmdline 00:07:53.957 ************************************ 00:07:53.957 12:20:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:53.957 12:20:27 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:53.957 12:20:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.957 12:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.957 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:07:53.957 ************************************ 00:07:53.957 START TEST version 00:07:53.957 ************************************ 00:07:53.957 12:20:27 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:53.957 * Looking for test storage... 00:07:53.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:53.957 12:20:27 version -- app/version.sh@17 -- # get_header_version major 00:07:53.957 12:20:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.957 12:20:27 version -- app/version.sh@14 -- # cut -f2 00:07:53.957 12:20:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.957 12:20:27 version -- app/version.sh@17 -- # major=24 00:07:53.957 12:20:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:53.957 12:20:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.957 12:20:27 version -- app/version.sh@14 -- # cut -f2 00:07:53.957 12:20:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.957 12:20:27 version -- app/version.sh@18 -- # minor=9 00:07:53.957 12:20:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:53.957 12:20:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.958 12:20:27 version -- app/version.sh@14 -- # cut -f2 00:07:53.958 12:20:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.958 12:20:27 version -- app/version.sh@19 -- # patch=0 00:07:53.958 12:20:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:53.958 12:20:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:53.958 12:20:27 version -- app/version.sh@14 -- # cut -f2 00:07:53.958 12:20:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.958 12:20:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:53.958 12:20:27 version -- app/version.sh@22 -- # version=24.9 00:07:53.958 12:20:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.958 12:20:27 version -- app/version.sh@28 -- # version=24.9rc0 00:07:53.958 12:20:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:53.958 12:20:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
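The get_header_version calls traced above reduce to a grep/cut/tr pipeline over include/spdk/version.h; a minimal sketch of the same extraction (header path as used in this workspace, expected output inferred from this checkout):

  hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  # cut -f2 / tr -d '"' mirror the pipeline traced in app/version.sh
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix}"   # expected: 24.9-pre for this tree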
00:07:54.218 12:20:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:54.218 12:20:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:54.218 00:07:54.218 real 0m0.189s 00:07:54.218 user 0m0.087s 00:07:54.218 sys 0m0.145s 00:07:54.218 12:20:27 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.218 12:20:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:54.218 ************************************ 00:07:54.218 END TEST version 00:07:54.218 ************************************ 00:07:54.218 12:20:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.218 12:20:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@198 -- # uname -s 00:07:54.218 12:20:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:54.218 12:20:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:54.218 12:20:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:54.218 12:20:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:54.218 12:20:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.218 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.218 12:20:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:54.218 12:20:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:54.218 12:20:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.218 12:20:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.218 12:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.218 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.218 ************************************ 00:07:54.218 START TEST nvmf_tcp 00:07:54.218 ************************************ 00:07:54.218 12:20:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.218 * Looking for test storage... 00:07:54.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:54.479 12:20:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:54.479 12:20:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:54.479 12:20:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:54.479 12:20:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.479 12:20:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.479 12:20:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.479 ************************************ 00:07:54.479 START TEST nvmf_target_core 00:07:54.479 ************************************ 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:54.479 * Looking for test storage... 
00:07:54.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.479 ************************************ 00:07:54.479 START TEST nvmf_abort 00:07:54.479 ************************************ 00:07:54.479 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:54.740 * Looking for test storage... 00:07:54.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.740 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
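The nvmf/common.sh variables sourced above (NVMF_PORT, NVME_HOSTNQN, NVME_HOSTID, NVME_CONNECT) exist so kernel-initiator tests can attach to whatever the target exposes; this particular test drives the target with the SPDK abort example instead, so the following nvme-cli form is purely illustrative:

  # illustrative only: attach a kernel initiator to the subsystem this test creates later
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
      --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a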
00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.741 12:20:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.741 12:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.741 12:20:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.741 12:20:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.741 12:20:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.741 12:20:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.876 12:20:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.876 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:02.877 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:02.877 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.877 12:20:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:02.877 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:02.877 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.877 12:20:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.877 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.138 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.138 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.138 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:08:03.138 00:08:03.138 --- 10.0.0.2 ping statistics --- 00:08:03.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.138 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:08:03.139 00:08:03.139 --- 10.0.0.1 ping statistics --- 00:08:03.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.139 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=242173 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 242173 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 242173 ']' 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.139 12:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.139 [2024-07-25 12:20:36.455736] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
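For readability, the interface plumbing traced above boils down to the following sequence: the target-facing port is moved into a private namespace, both sides get 10.0.0.x addresses, port 4420 is opened, and nvmf_tgt is launched inside the namespace (paths shortened; run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-facing port
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &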
00:08:03.139 [2024-07-25 12:20:36.455804] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.139 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.139 [2024-07-25 12:20:36.549611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.400 [2024-07-25 12:20:36.661973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.400 [2024-07-25 12:20:36.662044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.400 [2024-07-25 12:20:36.662055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.400 [2024-07-25 12:20:36.662065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.400 [2024-07-25 12:20:36.662073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.400 [2024-07-25 12:20:36.662179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.400 [2024-07-25 12:20:36.662328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.400 [2024-07-25 12:20:36.662330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.972 [2024-07-25 12:20:37.306337] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.972 Malloc0 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.972 Delay0 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.972 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.233 [2024-07-25 12:20:37.403152] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.233 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:04.233 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.233 [2024-07-25 12:20:37.545740] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:06.821 Initializing NVMe Controllers 00:08:06.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:06.821 controller IO queue size 128 less than required 00:08:06.821 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:06.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:06.821 Initialization complete. Launching workers. 
00:08:06.821 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37052 00:08:06.821 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37113, failed to submit 62 00:08:06.821 success 37056, unsuccess 57, failed 0 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.821 rmmod nvme_tcp 00:08:06.821 rmmod nvme_fabrics 00:08:06.821 rmmod nvme_keyring 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 242173 ']' 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 242173 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 242173 ']' 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 242173 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 242173 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 242173' 00:08:06.821 killing process with pid 242173 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 242173 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 242173 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.821 12:20:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.759 00:08:08.759 real 0m14.204s 00:08:08.759 user 0m13.793s 00:08:08.759 sys 0m7.343s 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:08.759 ************************************ 00:08:08.759 END TEST nvmf_abort 00:08:08.759 ************************************ 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.759 ************************************ 00:08:08.759 START TEST nvmf_ns_hotplug_stress 00:08:08.759 ************************************ 00:08:08.759 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:09.020 * Looking for test storage... 
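The cleanup that closed nvmf_abort just above (nvmftestfini) is the standard tail of every sub-test in this job; condensed, and with the pid from this run, it amounts to:

  modprobe -v -r nvme-tcp     # also drops nvme_fabrics and nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill 242173                 # the nvmf_tgt started for the test
  ip -4 addr flush cvl_0_1    # after _remove_spdk_ns, which presumably tears down the test namespace

The ns_hotplug_stress test that starts here follows the same pattern: environment setup first (the long common.sh trace below), then the actual hot-plug loop, then the same teardown.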
00:08:09.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.020 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.021 12:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
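The block that follows is common.sh deciding which physical NICs to hand to the test: it builds lists of supported Intel E810/X722 and Mellanox PCI IDs, scans the PCI bus for them, and maps each match to its kernel netdev through sysfs. The mapping step is essentially just (illustrative, using one of the addresses found in this run):

  pci=0000:4b:00.0
  ls /sys/bus/pci/devices/$pci/net/    # -> cvl_0_0 on this machine

Two E810 ports (0000:4b:00.0 and 0000:4b:00.1, device 0x159b) turn up, giving the cvl_0_0 and cvl_0_1 interfaces used for the rest of the run.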
00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:17.163 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.163 12:20:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:17.163 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:17.163 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:17.163 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.163 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.164 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:08:17.425 00:08:17.425 --- 10.0.0.2 ping statistics --- 00:08:17.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.425 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:08:17.425 00:08:17.425 --- 10.0.0.1 ping statistics --- 00:08:17.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.425 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.425 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=247281 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 247281 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 247281 ']' 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.426 12:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.685 [2024-07-25 12:20:50.880723] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:08:17.685 [2024-07-25 12:20:50.880783] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.685 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.686 [2024-07-25 12:20:50.972575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:17.686 [2024-07-25 12:20:51.080970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.686 [2024-07-25 12:20:51.081038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.686 [2024-07-25 12:20:51.081055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.686 [2024-07-25 12:20:51.081066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.686 [2024-07-25 12:20:51.081077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.686 [2024-07-25 12:20:51.081280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.686 [2024-07-25 12:20:51.081343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.686 [2024-07-25 12:20:51.081346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:18.629 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:18.629 [2024-07-25 12:20:51.999059] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.890 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:18.890 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.150 
[2024-07-25 12:20:52.477059] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.151 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.411 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:19.672 Malloc0 00:08:19.672 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:19.931 Delay0 00:08:19.932 12:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.192 12:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:20.192 NULL1 00:08:20.452 12:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:20.452 12:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=247876 00:08:20.452 12:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:20.453 12:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:20.453 12:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.713 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.653 Read completed with error (sct=0, sc=11) 00:08:21.653 12:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.914 12:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:21.914 12:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:22.175 true 00:08:22.175 12:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:22.175 12:20:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.116 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.376 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:23.376 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:23.376 true 00:08:23.376 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:23.376 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.637 12:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.898 12:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:23.898 12:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:24.160 true 00:08:24.160 12:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:24.160 12:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.100 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.360 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:25.360 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:25.621 true 00:08:25.621 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:25.621 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.621 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.881 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:25.881 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:26.141 true 00:08:26.141 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:26.141 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.526 12:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.526 12:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:27.526 12:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:27.787 true 00:08:27.787 12:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:27.787 12:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.727 12:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.727 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:28.727 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:28.987 true 00:08:28.987 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:28.987 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.247 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.247 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:29.247 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:29.507 true 00:08:29.507 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:29.507 12:21:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.447 12:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.707 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:30.707 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:30.968 true 00:08:30.968 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:30.968 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.228 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.487 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:31.487 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:31.487 true 00:08:31.487 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:31.488 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.871 12:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.871 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:32.871 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:32.871 true 00:08:33.156 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:33.156 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.156 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.451 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:33.451 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:33.713 true 00:08:33.713 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:33.713 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.651 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.911 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:34.911 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:34.911 true 00:08:35.171 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:35.171 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.171 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.431 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:35.431 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:35.690 true 00:08:35.690 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:35.690 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.628 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.889 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:36.889 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:37.149 true 00:08:37.149 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:37.149 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.408 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.668 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:37.668 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:37.668 true 00:08:37.668 12:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:37.668 12:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.051 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.051 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:39.051 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:39.312 true 00:08:39.312 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:39.312 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.571 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.571 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:39.571 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:39.831 true 00:08:39.831 12:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:39.831 12:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.214 12:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.214 12:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:41.214 12:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:41.473 true 00:08:41.473 12:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:41.473 12:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.412 12:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.412 12:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:42.412 12:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:42.672 true 00:08:42.672 12:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:42.672 12:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.672 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.931 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:42.931 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:43.192 true 00:08:43.192 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:43.192 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.574 12:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.574 12:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:44.574 12:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:44.834 true 00:08:44.834 12:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 247876 00:08:44.834 12:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.774 12:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.774 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:45.774 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:46.034 true 00:08:46.034 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:46.034 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.294 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.554 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:46.554 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:46.554 true 00:08:46.554 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:46.554 12:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.945 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.945 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:47.945 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:48.205 true 00:08:48.205 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876 00:08:48.205 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.145 
12:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:49.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:49.405 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:49.405 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:49.405 true
00:08:49.405 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876
00:08:49.405 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:49.666 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:49.927 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:49.927 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:50.187 true
00:08:50.187 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876
00:08:50.187 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:51.128 Initializing NVMe Controllers
00:08:51.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:51.128 Controller IO queue size 128, less than required.
00:08:51.128 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:51.128 Controller IO queue size 128, less than required.
00:08:51.128 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:51.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:51.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:51.128 Initialization complete. Launching workers.
00:08:51.128 ========================================================
00:08:51.128 Latency(us)
00:08:51.128 Device Information : IOPS MiB/s Average min max
00:08:51.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1140.97 0.56 72353.29 2496.82 1020259.50
00:08:51.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4713.77 2.30 27163.50 9993.28 551709.42
00:08:51.128 ========================================================
00:08:51.128 Total : 5854.73 2.86 35970.05 2496.82 1020259.50
00:08:51.128
00:08:51.128 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:51.389 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:51.389 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:51.650 true
00:08:51.650 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 247876
00:08:51.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (247876) - No such process
00:08:51.650 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 247876
00:08:51.650 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:51.912 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:52.172 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:52.172 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:52.172 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:52.172 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:52.172 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:52.172 null0
00:08:52.433 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:52.433 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:52.433 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:52.433 null1
00:08:52.433 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:52.433 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:52.433 12:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:52.693 null2 00:08:52.693 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:52.693 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:52.693 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:52.954 null3 00:08:52.954 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:52.954 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:52.954 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:53.215 null4 00:08:53.215 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:53.215 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:53.215 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:53.475 null5 00:08:53.475 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:53.475 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:53.475 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:53.736 null6 00:08:53.736 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:53.736 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:53.736 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:54.000 null7 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
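The ns_hotplug_stress.sh@58-@64 lines traced above mark the switch from the single Delay0 namespace loop to the parallel phase: eight null bdevs (null0-null7) are created over RPC and one add_remove worker is started per bdev, with the worker PIDs collected for a later wait. Reconstructed only from the echoed commands, and writing rpc.py as shorthand for the full scripts/rpc.py path shown in the log, the launcher is roughly the following sketch (not the script verbatim):

    nthreads=8
    pids=()
    # one 100 MB null bdev with a 4096-byte block size per worker
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done
    # one backgrounded add_remove worker per bdev; namespace IDs are 1-based
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

The add_remove helper itself is sketched a little further down, where its per-iteration trace begins.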
00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
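Each backgrounded worker is the add_remove helper whose body shows up in the trace as the ns_hotplug_stress.sh@14-@18 lines: ten iterations of hot-adding the worker's null bdev as a namespace of cnode1 and immediately hot-removing it. A minimal reconstruction from those echoed lines (the real helper may differ in detail; rpc.py again stands for the full scripts/rpc.py path):

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace 10 times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

The namespace ID is fixed per worker, so each worker only ever adds and removes its own namespace.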
00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:54.000 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:54.001 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:54.001 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 254067 254068 254070 254072 254074 254076 254078 254080 00:08:54.001 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:54.001 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:54.001 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:54.001 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.001 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.316 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.316 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.316 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.316 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.316 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.317 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.607 12:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.869 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.131 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.393 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.655 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.655 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.655 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.655 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.655 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.655 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.656 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.917 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:56.178 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.178 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.178 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:56.178 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.178 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.178 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:56.178 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
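Note on reading this stretch of the trace: the @16-@18 lines come from the eight concurrent add_remove workers sharing one logger prefix, so the add_ns/remove_ns calls for namespaces 1-8 appear interleaved in timestamp order rather than grouped per namespace; each individual worker still strictly alternates add and remove on its own namespace until its ten iterations complete.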
00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.179 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.440 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.701 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:56.701 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:56.701 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.962 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:57.224 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.485 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:57.745 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:57.745 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.745 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:58.005 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.006 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:58.006 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:58.006 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.265 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.525 rmmod nvme_tcp 00:08:58.525 rmmod nvme_fabrics 00:08:58.525 rmmod nvme_keyring 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 247281 ']' 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 247281 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 247281 ']' 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 247281 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 247281 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 247281' 00:08:58.525 killing process with pid 247281 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 247281 00:08:58.525 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 247281 00:08:58.785 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.785 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:08:58.785 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.786 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.786 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.786 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.786 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.786 12:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.336 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.336 00:09:01.336 real 0m52.064s 00:09:01.336 user 3m21.679s 00:09:01.336 sys 0m18.068s 00:09:01.336 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.336 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.336 ************************************ 00:09:01.336 END TEST nvmf_ns_hotplug_stress 00:09:01.336 ************************************ 00:09:01.336 12:21:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:01.336 12:21:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:01.336 12:21:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.337 ************************************ 00:09:01.337 START TEST nvmf_delete_subsystem 00:09:01.337 ************************************ 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:01.337 * Looking for test storage... 
00:09:01.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.337 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:09.481 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:09.481 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.481 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:09.482 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:09.482 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.482 12:21:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:09:09.482 00:09:09.482 --- 10.0.0.2 ping statistics --- 00:09:09.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.482 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:09:09.482 00:09:09.482 --- 10.0.0.1 ping statistics --- 00:09:09.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.482 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=259371 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 259371 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 259371 ']' 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.482 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.744 [2024-07-25 12:21:42.906116] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
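For reference, the nvmf_tcp_init steps traced above boil down to the following shell sketch (a minimal reconstruction, assuming the same interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses used in this run; the authoritative logic is in spdk/test/nvmf/common.sh):

# flush any stale addresses, then isolate the target-side port in its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1 on the host, the target answers on 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the default NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1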
00:09:09.744 [2024-07-25 12:21:42.906182] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.744 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.744 [2024-07-25 12:21:42.986983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.744 [2024-07-25 12:21:43.080057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.744 [2024-07-25 12:21:43.080115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.744 [2024-07-25 12:21:43.080123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.744 [2024-07-25 12:21:43.080130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.744 [2024-07-25 12:21:43.080135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.744 [2024-07-25 12:21:43.080219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.744 [2024-07-25 12:21:43.080223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.687 [2024-07-25 12:21:43.835724] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.687 [2024-07-25 12:21:43.860304] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.687 NULL1 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.687 Delay0 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=259674 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:10.687 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:10.687 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.687 [2024-07-25 12:21:43.986952] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
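Condensed from the rpc_cmd and perf invocations above, the delete-while-busy sequence this test runs is roughly the following sketch (rpc.py stands for the full scripts/rpc.py path shown in the log, and backgrounding perf ahead of the delete is an assumption about how delete_subsystem.sh sequences the two; see spdk/test/nvmf/target/delete_subsystem.sh for the real flow):

# build the target: TCP transport, one subsystem, a listener on 10.0.0.2:4420
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# back the namespace with a null bdev behind a delay bdev, so submitted I/O stays in flight long enough
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# drive a random read/write workload from the initiator, then delete the subsystem underneath it
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The long run of "completed with error" and "starting I/O failed" records that follows is that outstanding perf I/O erroring out as the subsystem goes away, which is the condition this test is meant to exercise.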
00:09:12.603 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.603 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.603 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 starting I/O failed: -6 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 Read completed with error (sct=0, sc=8) 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.864 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 [2024-07-25 12:21:46.145315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e000 is same with the 
state(5) to be set 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error 
(sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 
starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.865 Write completed with error (sct=0, sc=8) 00:09:12.865 Read completed with error (sct=0, sc=8) 00:09:12.865 starting I/O failed: -6 00:09:12.866 Read completed with error (sct=0, sc=8) 00:09:12.866 Read completed with error (sct=0, sc=8) 00:09:12.866 starting I/O failed: -6 00:09:12.866 Write completed with error (sct=0, sc=8) 00:09:12.866 Read completed with error (sct=0, sc=8) 00:09:12.866 starting I/O failed: -6 00:09:12.866 Read completed with error (sct=0, sc=8) 00:09:12.866 Read completed with error (sct=0, sc=8) 00:09:12.866 starting I/O failed: -6 00:09:12.866 Write completed with error (sct=0, sc=8) 00:09:12.866 Write completed with error (sct=0, sc=8) 00:09:12.866 starting I/O failed: -6 00:09:12.866 Write completed with error (sct=0, sc=8) 00:09:12.866 starting I/O failed: -6 00:09:12.866 starting I/O failed: -6 00:09:12.866 starting I/O failed: -6 00:09:13.810 [2024-07-25 12:21:47.099724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8fac0 is same with the state(5) to be set 00:09:13.810 Read completed with 
error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 [2024-07-25 12:21:47.147231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8ea40 is same with the state(5) to be set 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Write completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.810 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 
00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 [2024-07-25 12:21:47.148358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff24400d000 is same with the state(5) to be set 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 [2024-07-25 12:21:47.148792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e3e0 is same with the state(5) to be set 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 
00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Read completed with error (sct=0, sc=8) 00:09:13.811 Write completed with error (sct=0, sc=8) 00:09:13.811 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.811 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:13.811 [2024-07-25 12:21:47.150226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff24400d660 is same with the state(5) to be set 00:09:13.811 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 259674 00:09:13.811 Initializing NVMe Controllers 00:09:13.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.811 Controller IO queue size 128, less than required. 00:09:13.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:13.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:13.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:13.811 Initialization complete. Launching workers. 
00:09:13.811 ======================================================== 00:09:13.811 Latency(us) 00:09:13.811 Device Information : IOPS MiB/s Average min max 00:09:13.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.37 0.09 883172.63 1321.44 1015407.12 00:09:13.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 188.75 0.09 899071.64 1506.49 1017358.48 00:09:13.811 ======================================================== 00:09:13.811 Total : 365.12 0.18 891391.79 1321.44 1017358.48 00:09:13.811 00:09:13.811 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:13.811 [2024-07-25 12:21:47.150956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8fac0 (9): Bad file descriptor 00:09:13.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 259674 00:09:14.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (259674) - No such process 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 259674 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 259674 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 259674 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.382 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.383 [2024-07-25 12:21:47.678378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=260293 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:14.383 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:14.383 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.383 [2024-07-25 12:21:47.772344] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
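For reference, the spdk_nvme_perf invocation above restated with a per-flag gloss. The glosses follow common spdk_nvme_perf usage and are a hedged reading rather than documentation verified against this exact build; the core mask 0xC selects cores 2 and 3, which lines up with the "lcore 2" and "lcore 3" associations reported in the perf output for these runs.

    # Hedged flag gloss (based on typical spdk_nvme_perf usage, not re-checked
    # against this exact SPDK build):
    #   -c 0xC            core mask -> worker threads on cores 2 and 3
    #   -r '...'          NVMe-oF transport ID: TCP, IPv4, 10.0.0.2, service 4420
    #   -t 3              run time in seconds
    #   -q 128            queue depth
    #   -w randrw -M 70   random mixed workload, roughly 70% reads
    #   -o 512            I/O size in bytes
    #   -P 4              I/O queue pairs per namespace (assumption)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4

The repeated "(( delay++ > 20 ))" / "kill -0 260293" / "sleep 0.5" entries that follow are delete_subsystem.sh polling every half second for this perf process (pid 260293) to exit, giving up after roughly 20 iterations.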
00:09:14.953 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:14.953 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:14.953 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.523 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:15.523 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:15.523 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.094 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.094 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:16.094 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.355 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.355 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:16.355 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.926 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.926 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:16.926 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.497 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.497 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:17.497 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.497 Initializing NVMe Controllers 00:09:17.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:17.497 Controller IO queue size 128, less than required. 00:09:17.497 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:17.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:17.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:17.497 Initialization complete. Launching workers. 
00:09:17.497 ======================================================== 00:09:17.497 Latency(us) 00:09:17.497 Device Information : IOPS MiB/s Average min max 00:09:17.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003937.09 1000400.12 1013886.13 00:09:17.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005455.49 1000313.04 1015545.18 00:09:17.497 ======================================================== 00:09:17.497 Total : 256.00 0.12 1004696.29 1000313.04 1015545.18 00:09:17.497 00:09:18.067 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:18.067 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 260293 00:09:18.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (260293) - No such process 00:09:18.067 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 260293 00:09:18.067 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.068 rmmod nvme_tcp 00:09:18.068 rmmod nvme_fabrics 00:09:18.068 rmmod nvme_keyring 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 259371 ']' 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 259371 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 259371 ']' 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 259371 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 259371 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 259371' 00:09:18.068 killing process with pid 259371 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 259371 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 259371 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.068 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.611 00:09:20.611 real 0m19.243s 00:09:20.611 user 0m31.416s 00:09:20.611 sys 0m7.156s 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 ************************************ 00:09:20.611 END TEST nvmf_delete_subsystem 00:09:20.611 ************************************ 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 ************************************ 00:09:20.611 START TEST nvmf_host_management 00:09:20.611 ************************************ 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:20.611 * Looking for test storage... 
00:09:20.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:20.611 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.612 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:09:28.827 
12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:28.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:28.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:28.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:28.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:28.827 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.827 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.827 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.827 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:28.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:28.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:09:28.827 00:09:28.827 --- 10.0.0.2 ping statistics --- 00:09:28.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.828 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:09:28.828 00:09:28.828 --- 10.0.0.1 ping statistics --- 00:09:28.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.828 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=265403 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 265403 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 265403 ']' 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.828 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:28.828 [2024-07-25 12:22:02.170231] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:09:28.828 [2024-07-25 12:22:02.170299] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.828 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.089 [2024-07-25 12:22:02.260182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.089 [2024-07-25 12:22:02.371700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.089 [2024-07-25 12:22:02.371767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.089 [2024-07-25 12:22:02.371785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.089 [2024-07-25 12:22:02.371798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.089 [2024-07-25 12:22:02.371810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.089 [2024-07-25 12:22:02.371982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.089 [2024-07-25 12:22:02.372134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.089 [2024-07-25 12:22:02.372293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.089 [2024-07-25 12:22:02.372297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.661 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.661 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:29.661 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.661 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.661 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.922 [2024-07-25 12:22:03.091927] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:29.922 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.923 Malloc0 00:09:29.923 [2024-07-25 12:22:03.170694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=265484 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 265484 /var/tmp/bdevperf.sock 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 265484 ']' 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
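The rpcs.txt batch that host_management.sh feeds to rpc_cmd just above is not echoed line by line in this trace, only its effects are (the Malloc0 bdev and the TCP listener notice). As a rough standalone sketch, not copied from this run, the same kind of target can be assembled with the standard rpc.py calls; the serial number and the 64 MiB / 512 B malloc sizing here are illustrative assumptions:

    # build an NVMe-oF/TCP target equivalent to the one this test drives through rpc_cmd
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

In this run the calls go through rpc_cmd against the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace, which is why the listener notice above reports 10.0.0.2 port 4420.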
00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:29.923 { 00:09:29.923 "params": { 00:09:29.923 "name": "Nvme$subsystem", 00:09:29.923 "trtype": "$TEST_TRANSPORT", 00:09:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.923 "adrfam": "ipv4", 00:09:29.923 "trsvcid": "$NVMF_PORT", 00:09:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.923 "hdgst": ${hdgst:-false}, 00:09:29.923 "ddgst": ${ddgst:-false} 00:09:29.923 }, 00:09:29.923 "method": "bdev_nvme_attach_controller" 00:09:29.923 } 00:09:29.923 EOF 00:09:29.923 )") 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:29.923 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:29.923 "params": { 00:09:29.923 "name": "Nvme0", 00:09:29.923 "trtype": "tcp", 00:09:29.923 "traddr": "10.0.0.2", 00:09:29.923 "adrfam": "ipv4", 00:09:29.923 "trsvcid": "4420", 00:09:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:29.923 "hdgst": false, 00:09:29.923 "ddgst": false 00:09:29.923 }, 00:09:29.923 "method": "bdev_nvme_attach_controller" 00:09:29.923 }' 00:09:29.923 [2024-07-25 12:22:03.277189] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:09:29.923 [2024-07-25 12:22:03.277263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265484 ] 00:09:29.923 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.184 [2024-07-25 12:22:03.364511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.184 [2024-07-25 12:22:03.460738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.445 Running I/O for 10 seconds... 
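The bdevperf launch above reads its bdev configuration from gen_nvmf_target_json through /dev/fd/63; the generated bdev_nvme_attach_controller parameters are the ones printed by the printf '%s\n' step. A rough standalone equivalent, assuming SPDK's usual subsystems/bdev JSON config wrapper and a scratch file path chosen only for illustration (the params block itself is copied from this log), would be:

    cat > /tmp/bdevperf_nvme0.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10

While the job runs, the script polls bdevperf's own RPC socket with rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1, filtered through jq -r '.bdevs[0].num_read_ops'; the read_io_count=387 line below is that poll passing its threshold of 100 reads.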
00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.020 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.020 [2024-07-25 
12:22:04.214710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.214999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd80 is same 
with the state(5) to be set 00:09:31.020 [2024-07-25 12:22:04.215538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:31.020 [2024-07-25 12:22:04.215606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.020 [2024-07-25 12:22:04.215618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:31.020 [2024-07-25 12:22:04.215626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.020 [2024-07-25 12:22:04.215634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:31.020 [2024-07-25 12:22:04.215641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.020 [2024-07-25 12:22:04.215650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:31.021 [2024-07-25 12:22:04.215657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7e180 is same with the state(5) to be set 00:09:31.021 [2024-07-25 12:22:04.215750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 
[2024-07-25 12:22:04.215857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.215986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.215995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 
12:22:04.216020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 
12:22:04.216180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 12:22:04.216328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.021 [2024-07-25 12:22:04.216336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.021 [2024-07-25 
12:22:04.216344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 
12:22:04.216506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 
12:22:04.216675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.022 [2024-07-25 12:22:04.216802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.216881] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeaf710 was disconnected and freed. reset controller. 
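The long run of ABORTED - SQ DELETION completions above, one per outstanding write on the I/O queue (cid 0 through 63, matching the -q 64 queue depth), is the expected fallout of the nvmf_subsystem_remove_host call issued at target/host_management.sh@84: with host0 dropped from the subsystem's allow list the target tears down that host's queue pairs, every in-flight command completes as aborted, and bdev_nvme then tries to reset the controller. The access toggle itself is just the two RPCs the script drives through rpc_cmd (remove at @84 above, add back at @85 below), shown here in standalone form:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0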
00:09:31.022 [2024-07-25 12:22:04.217995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:31.022 task offset: 57344 on job bdev=Nvme0n1 fails 00:09:31.022 00:09:31.022 Latency(us) 00:09:31.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.022 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:31.022 Job: Nvme0n1 ended in about 0.43 seconds with error 00:09:31.022 Verification LBA range: start 0x0 length 0x400 00:09:31.022 Nvme0n1 : 0.43 1045.96 65.37 149.42 0.00 52162.06 2029.10 52428.80 00:09:31.022 =================================================================================================================== 00:09:31.022 Total : 1045.96 65.37 149.42 0.00 52162.06 2029.10 52428.80 00:09:31.022 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.022 [2024-07-25 12:22:04.220061] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.022 [2024-07-25 12:22:04.220096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7e180 (9): Bad file descriptor 00:09:31.022 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:31.022 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.022 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.022 [2024-07-25 12:22:04.223976] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:31.022 [2024-07-25 12:22:04.224102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:31.022 [2024-07-25 12:22:04.224147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.022 [2024-07-25 12:22:04.224163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:09:31.022 [2024-07-25 12:22:04.224173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:31.022 [2024-07-25 12:22:04.224181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:09:31.023 [2024-07-25 12:22:04.224190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa7e180 00:09:31.023 [2024-07-25 12:22:04.224214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7e180 (9): Bad file descriptor 00:09:31.023 [2024-07-25 12:22:04.224240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:09:31.023 [2024-07-25 12:22:04.224248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:09:31.023 [2024-07-25 12:22:04.224258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:09:31.023 [2024-07-25 12:22:04.224273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
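A quick sanity check on the bdevperf summary above: with 65536-byte I/Os, MiB/s is IOPS divided by 16, and 1045.96 / 16 = 65.37 MiB/s matches the reported column; Fail/s of 149.42 over the 0.43 s runtime works out to roughly 64 failed I/Os, i.e. one queue depth's worth of writes aborted when the host was removed. The reconnect attempt logged right after the add_host RPC is still rejected ('does not allow host', the FABRIC CONNECT completing with sct 1 / sc 132), apparently because it lands before the allow-list change has taken effect, so the reset is reported as failed here; the script then sleeps, confirms the first bdevperf has already exited, and re-runs it for one second once access is restored.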
00:09:31.023 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.023 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 265484 00:09:31.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (265484) - No such process 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:31.967 { 00:09:31.967 "params": { 00:09:31.967 "name": "Nvme$subsystem", 00:09:31.967 "trtype": "$TEST_TRANSPORT", 00:09:31.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.967 "adrfam": "ipv4", 00:09:31.967 "trsvcid": "$NVMF_PORT", 00:09:31.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.967 "hdgst": ${hdgst:-false}, 00:09:31.967 "ddgst": ${ddgst:-false} 00:09:31.967 }, 00:09:31.967 "method": "bdev_nvme_attach_controller" 00:09:31.967 } 00:09:31.967 EOF 00:09:31.967 )") 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:31.967 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:31.967 "params": { 00:09:31.967 "name": "Nvme0", 00:09:31.967 "trtype": "tcp", 00:09:31.967 "traddr": "10.0.0.2", 00:09:31.967 "adrfam": "ipv4", 00:09:31.967 "trsvcid": "4420", 00:09:31.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:31.967 "hdgst": false, 00:09:31.967 "ddgst": false 00:09:31.967 }, 00:09:31.968 "method": "bdev_nvme_attach_controller" 00:09:31.968 }' 00:09:31.968 [2024-07-25 12:22:05.292446] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:09:31.968 [2024-07-25 12:22:05.292519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265806 ] 00:09:31.968 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.228 [2024-07-25 12:22:05.396781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.228 [2024-07-25 12:22:05.492000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.490 Running I/O for 1 seconds... 00:09:33.432 00:09:33.432 Latency(us) 00:09:33.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.432 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:33.432 Verification LBA range: start 0x0 length 0x400 00:09:33.432 Nvme0n1 : 1.04 1087.07 67.94 0.00 0.00 57461.30 6553.60 51622.20 00:09:33.432 =================================================================================================================== 00:09:33.432 Total : 1087.07 67.94 0.00 0.00 57461.30 6553.60 51622.20 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:33.693 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.693 rmmod nvme_tcp 00:09:33.693 rmmod nvme_fabrics 00:09:33.693 rmmod nvme_keyring 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 265403 ']' 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 265403 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 265403 ']' 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 265403 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@953 -- # uname 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.693 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 265403 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 265403' 00:09:33.955 killing process with pid 265403 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 265403 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 265403 00:09:33.955 [2024-07-25 12:22:07.311975] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.955 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:36.501 00:09:36.501 real 0m15.808s 00:09:36.501 user 0m24.856s 00:09:36.501 sys 0m7.372s 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.501 ************************************ 00:09:36.501 END TEST nvmf_host_management 00:09:36.501 ************************************ 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.501 ************************************ 00:09:36.501 START TEST nvmf_lvol 00:09:36.501 
************************************ 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:36.501 * Looking for test storage... 00:09:36.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.501 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:09:44.645 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:44.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:44.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:44.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:44.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.646 12:22:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:44.646 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.646 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:44.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:09:44.908 00:09:44.908 --- 10.0.0.2 ping statistics --- 00:09:44.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.908 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:44.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:09:44.908 00:09:44.908 --- 10.0.0.1 ping statistics --- 00:09:44.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.908 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=270600 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 270600 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 270600 ']' 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.908 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:44.908 [2024-07-25 12:22:18.184908] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:09:44.908 [2024-07-25 12:22:18.184969] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.908 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.908 [2024-07-25 12:22:18.277833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.170 [2024-07-25 12:22:18.370073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.170 [2024-07-25 12:22:18.370133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.170 [2024-07-25 12:22:18.370141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.170 [2024-07-25 12:22:18.370148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.170 [2024-07-25 12:22:18.370153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.170 [2024-07-25 12:22:18.370240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.170 [2024-07-25 12:22:18.370392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.170 [2024-07-25 12:22:18.370393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.741 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.741 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:45.741 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.741 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:45.741 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:45.741 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.741 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:46.002 [2024-07-25 12:22:19.286634] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.002 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.262 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:46.262 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.523 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:46.523 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:46.783 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:47.043 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=66180803-0eb2-4ace-b353-73d67a204429 
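At this point nvmf_lvol.sh has created the TCP transport and carved an lvstore out of a raid0 built from two malloc bdevs; the entries that follow create the lvol, the subsystem, and the TCP listener. A condensed sketch of that RPC sequence, with the long workspace prefix shortened to $rpc; the command names and flags are the ones visible in this trace, not a quotation of the script itself.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, as logged above
$rpc bdev_malloc_create 64 512                                  # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                                  # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore UUID, e.g. 66180803-... above
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB lvol (LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
The teardown near the end of this test (nvmf_delete_subsystem, bdev_lvol_delete, bdev_lvol_delete_lvstore) removes the same objects in reverse order.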
00:09:47.043 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 66180803-0eb2-4ace-b353-73d67a204429 lvol 20 00:09:47.304 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bec4351f-224a-498f-ad00-44a544c31484 00:09:47.304 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.304 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bec4351f-224a-498f-ad00-44a544c31484 00:09:47.565 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:47.825 [2024-07-25 12:22:21.098948] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.825 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.086 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=271245 00:09:48.086 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:48.086 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:48.086 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.026 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bec4351f-224a-498f-ad00-44a544c31484 MY_SNAPSHOT 00:09:49.286 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=85efb124-1597-4f6c-9581-6310c2d391a9 00:09:49.286 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bec4351f-224a-498f-ad00-44a544c31484 30 00:09:49.545 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 85efb124-1597-4f6c-9581-6310c2d391a9 MY_CLONE 00:09:49.805 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3a004f71-33c9-49bf-bf0e-57674fe0a530 00:09:49.805 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3a004f71-33c9-49bf-bf0e-57674fe0a530 00:09:50.066 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 271245 00:10:00.125 Initializing NVMe Controllers 00:10:00.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:00.125 Controller IO queue size 128, less than required. 00:10:00.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:00.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:00.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:00.125 Initialization complete. Launching workers. 00:10:00.125 ======================================================== 00:10:00.125 Latency(us) 00:10:00.125 Device Information : IOPS MiB/s Average min max 00:10:00.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9140.37 35.70 14004.76 1058.52 85064.37 00:10:00.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13272.04 51.84 9644.95 3433.27 54124.18 00:10:00.125 ======================================================== 00:10:00.125 Total : 22412.41 87.55 11423.00 1058.52 85064.37 00:10:00.125 00:10:00.125 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bec4351f-224a-498f-ad00-44a544c31484 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66180803-0eb2-4ace-b353-73d67a204429 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:00.125 rmmod nvme_tcp 00:10:00.125 rmmod nvme_fabrics 00:10:00.125 rmmod nvme_keyring 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 270600 ']' 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 270600 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 270600 ']' 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 270600 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 270600 00:10:00.125 12:22:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 270600' 00:10:00.125 killing process with pid 270600 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 270600 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 270600 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.125 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:01.515 00:10:01.515 real 0m25.262s 00:10:01.515 user 1m6.982s 00:10:01.515 sys 0m8.930s 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.515 ************************************ 00:10:01.515 END TEST nvmf_lvol 00:10:01.515 ************************************ 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.515 ************************************ 00:10:01.515 START TEST nvmf_lvs_grow 00:10:01.515 ************************************ 00:10:01.515 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:01.777 * Looking for test storage... 
00:10:01.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.777 12:22:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:01.777 12:22:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:01.777 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.777 12:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.777 12:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.777 12:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:01.777 12:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:01.777 12:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:01.777 12:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:09.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:09.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:09.916 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.917 
12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:09.917 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:09.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.917 12:22:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.917 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:10.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:10:10.177 00:10:10.177 --- 10.0.0.2 ping statistics --- 00:10:10.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.177 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:10:10.177 00:10:10.177 --- 10.0.0.1 ping statistics --- 00:10:10.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.177 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=277583 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 277583 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 277583 ']' 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.177 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.177 [2024-07-25 12:22:43.576195] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:10:10.177 [2024-07-25 12:22:43.576280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.437 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.437 [2024-07-25 12:22:43.669183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.437 [2024-07-25 12:22:43.761420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.437 [2024-07-25 12:22:43.761480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.437 [2024-07-25 12:22:43.761488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.437 [2024-07-25 12:22:43.761494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.437 [2024-07-25 12:22:43.761500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.437 [2024-07-25 12:22:43.761524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.376 [2024-07-25 12:22:44.663729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 ************************************ 00:10:11.376 START TEST lvs_grow_clean 00:10:11.376 ************************************ 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:11.376 12:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:11.945 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:11.945 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:12.206 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e20cf96e-9477-4804-8622-73d18d253449 00:10:12.206 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:12.206 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:12.466 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:12.466 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:12.466 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e20cf96e-9477-4804-8622-73d18d253449 lvol 150 00:10:12.727 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=174a69a1-f05e-4fad-9ee3-a0341080e046 00:10:12.727 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:12.727 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:12.727 [2024-07-25 12:22:46.125723] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:12.727 [2024-07-25 12:22:46.125791] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:12.727 true 00:10:12.727 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:12.988 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:12.988 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:12.988 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:13.248 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 174a69a1-f05e-4fad-9ee3-a0341080e046 00:10:13.508 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:13.768 [2024-07-25 12:22:46.952244] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.768 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=278240 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 278240 /var/tmp/bdevperf.sock 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 278240 ']' 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.768 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:14.028 [2024-07-25 12:22:47.223724] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:10:14.028 [2024-07-25 12:22:47.223794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278240 ] 00:10:14.028 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.028 [2024-07-25 12:22:47.305666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.028 [2024-07-25 12:22:47.413289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.996 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.996 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:14.996 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:14.996 Nvme0n1 00:10:14.996 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:15.257 [ 00:10:15.257 { 00:10:15.257 "name": "Nvme0n1", 00:10:15.257 "aliases": [ 00:10:15.257 "174a69a1-f05e-4fad-9ee3-a0341080e046" 00:10:15.257 ], 00:10:15.257 "product_name": "NVMe disk", 00:10:15.257 "block_size": 4096, 00:10:15.257 "num_blocks": 38912, 00:10:15.257 "uuid": "174a69a1-f05e-4fad-9ee3-a0341080e046", 00:10:15.257 "assigned_rate_limits": { 00:10:15.257 "rw_ios_per_sec": 0, 00:10:15.257 "rw_mbytes_per_sec": 0, 00:10:15.257 "r_mbytes_per_sec": 0, 00:10:15.257 "w_mbytes_per_sec": 0 00:10:15.257 }, 00:10:15.257 "claimed": false, 00:10:15.257 "zoned": false, 00:10:15.257 "supported_io_types": { 00:10:15.257 "read": true, 00:10:15.257 "write": true, 00:10:15.257 "unmap": true, 00:10:15.257 "flush": true, 00:10:15.257 "reset": true, 00:10:15.257 "nvme_admin": true, 00:10:15.257 "nvme_io": true, 00:10:15.257 "nvme_io_md": false, 00:10:15.257 "write_zeroes": true, 00:10:15.257 "zcopy": false, 00:10:15.257 "get_zone_info": false, 00:10:15.257 "zone_management": false, 00:10:15.257 "zone_append": false, 00:10:15.257 "compare": true, 00:10:15.257 "compare_and_write": true, 00:10:15.257 "abort": true, 00:10:15.257 "seek_hole": false, 00:10:15.257 "seek_data": false, 00:10:15.257 "copy": true, 00:10:15.257 "nvme_iov_md": false 00:10:15.257 }, 00:10:15.257 "memory_domains": [ 00:10:15.257 { 00:10:15.257 "dma_device_id": "system", 00:10:15.257 "dma_device_type": 1 00:10:15.257 } 00:10:15.257 ], 00:10:15.257 "driver_specific": { 00:10:15.257 "nvme": [ 00:10:15.257 { 00:10:15.257 "trid": { 00:10:15.257 "trtype": "TCP", 00:10:15.257 "adrfam": "IPv4", 00:10:15.257 "traddr": "10.0.0.2", 00:10:15.257 "trsvcid": "4420", 00:10:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:15.257 }, 00:10:15.257 "ctrlr_data": { 00:10:15.257 "cntlid": 1, 00:10:15.257 "vendor_id": "0x8086", 00:10:15.257 "model_number": "SPDK bdev Controller", 00:10:15.257 "serial_number": "SPDK0", 00:10:15.257 "firmware_revision": "24.09", 00:10:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.257 "oacs": { 00:10:15.257 "security": 0, 00:10:15.257 "format": 0, 00:10:15.257 "firmware": 0, 00:10:15.257 "ns_manage": 0 00:10:15.257 }, 00:10:15.257 
"multi_ctrlr": true, 00:10:15.257 "ana_reporting": false 00:10:15.257 }, 00:10:15.257 "vs": { 00:10:15.257 "nvme_version": "1.3" 00:10:15.257 }, 00:10:15.257 "ns_data": { 00:10:15.257 "id": 1, 00:10:15.257 "can_share": true 00:10:15.257 } 00:10:15.257 } 00:10:15.257 ], 00:10:15.257 "mp_policy": "active_passive" 00:10:15.257 } 00:10:15.257 } 00:10:15.257 ] 00:10:15.257 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=278515 00:10:15.257 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:15.257 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:15.257 Running I/O for 10 seconds... 00:10:16.640 Latency(us) 00:10:16.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.640 Nvme0n1 : 1.00 19756.00 77.17 0.00 0.00 0.00 0.00 0.00 00:10:16.640 =================================================================================================================== 00:10:16.640 Total : 19756.00 77.17 0.00 0.00 0.00 0.00 0.00 00:10:16.640 00:10:17.210 12:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e20cf96e-9477-4804-8622-73d18d253449 00:10:17.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.470 Nvme0n1 : 2.00 19824.00 77.44 0.00 0.00 0.00 0.00 0.00 00:10:17.470 =================================================================================================================== 00:10:17.470 Total : 19824.00 77.44 0.00 0.00 0.00 0.00 0.00 00:10:17.470 00:10:17.470 true 00:10:17.470 12:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:17.470 12:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:17.731 12:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:17.731 12:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:17.731 12:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 278515 00:10:18.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.301 Nvme0n1 : 3.00 19865.33 77.60 0.00 0.00 0.00 0.00 0.00 00:10:18.301 =================================================================================================================== 00:10:18.301 Total : 19865.33 77.60 0.00 0.00 0.00 0.00 0.00 00:10:18.301 00:10:19.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.684 Nvme0n1 : 4.00 19898.50 77.73 0.00 0.00 0.00 0.00 0.00 00:10:19.684 =================================================================================================================== 00:10:19.684 Total : 19898.50 77.73 0.00 0.00 0.00 0.00 0.00 00:10:19.684 00:10:20.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:10:20.255 Nvme0n1 : 5.00 19922.00 77.82 0.00 0.00 0.00 0.00 0.00 00:10:20.255 =================================================================================================================== 00:10:20.255 Total : 19922.00 77.82 0.00 0.00 0.00 0.00 0.00 00:10:20.255 00:10:21.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.639 Nvme0n1 : 6.00 19942.17 77.90 0.00 0.00 0.00 0.00 0.00 00:10:21.639 =================================================================================================================== 00:10:21.639 Total : 19942.17 77.90 0.00 0.00 0.00 0.00 0.00 00:10:21.639 00:10:22.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.579 Nvme0n1 : 7.00 19963.86 77.98 0.00 0.00 0.00 0.00 0.00 00:10:22.579 =================================================================================================================== 00:10:22.579 Total : 19963.86 77.98 0.00 0.00 0.00 0.00 0.00 00:10:22.579 00:10:23.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.520 Nvme0n1 : 8.00 19974.50 78.03 0.00 0.00 0.00 0.00 0.00 00:10:23.520 =================================================================================================================== 00:10:23.520 Total : 19974.50 78.03 0.00 0.00 0.00 0.00 0.00 00:10:23.520 00:10:24.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.462 Nvme0n1 : 9.00 19982.78 78.06 0.00 0.00 0.00 0.00 0.00 00:10:24.462 =================================================================================================================== 00:10:24.462 Total : 19982.78 78.06 0.00 0.00 0.00 0.00 0.00 00:10:24.462 00:10:25.461 00:10:25.461 Latency(us) 00:10:25.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.461 Nvme0n1 : 10.00 19984.31 78.06 0.00 0.00 6398.83 2949.12 10989.88 00:10:25.461 =================================================================================================================== 00:10:25.461 Total : 19984.31 78.06 0.00 0.00 6398.83 2949.12 10989.88 00:10:25.461 0 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 278240 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 278240 ']' 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 278240 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 278240 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 278240' 00:10:25.461 killing process with pid 278240 00:10:25.461 12:22:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 278240 00:10:25.461 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.461 00:10:25.461 Latency(us) 00:10:25.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.461 =================================================================================================================== 00:10:25.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.461 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 278240 00:10:25.744 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:25.744 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:26.005 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:26.005 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:26.265 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:26.265 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:26.265 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:26.265 [2024-07-25 12:22:59.641845] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:26.526 request: 00:10:26.526 { 00:10:26.526 "uuid": "e20cf96e-9477-4804-8622-73d18d253449", 00:10:26.526 "method": "bdev_lvol_get_lvstores", 00:10:26.526 "req_id": 1 00:10:26.526 } 00:10:26.526 Got JSON-RPC error response 00:10:26.526 response: 00:10:26.526 { 00:10:26.526 "code": -19, 00:10:26.526 "message": "No such device" 00:10:26.526 } 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:26.526 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:26.787 aio_bdev 00:10:26.787 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 174a69a1-f05e-4fad-9ee3-a0341080e046 00:10:26.787 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=174a69a1-f05e-4fad-9ee3-a0341080e046 00:10:26.787 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:26.787 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:26.787 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:26.787 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:26.787 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:27.048 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 174a69a1-f05e-4fad-9ee3-a0341080e046 -t 2000 00:10:27.048 [ 00:10:27.048 { 00:10:27.048 "name": "174a69a1-f05e-4fad-9ee3-a0341080e046", 00:10:27.048 "aliases": [ 00:10:27.048 "lvs/lvol" 00:10:27.048 ], 00:10:27.048 "product_name": "Logical Volume", 00:10:27.048 "block_size": 4096, 00:10:27.048 "num_blocks": 38912, 00:10:27.048 "uuid": "174a69a1-f05e-4fad-9ee3-a0341080e046", 
00:10:27.048 "assigned_rate_limits": { 00:10:27.048 "rw_ios_per_sec": 0, 00:10:27.048 "rw_mbytes_per_sec": 0, 00:10:27.048 "r_mbytes_per_sec": 0, 00:10:27.048 "w_mbytes_per_sec": 0 00:10:27.048 }, 00:10:27.048 "claimed": false, 00:10:27.048 "zoned": false, 00:10:27.048 "supported_io_types": { 00:10:27.048 "read": true, 00:10:27.048 "write": true, 00:10:27.048 "unmap": true, 00:10:27.048 "flush": false, 00:10:27.048 "reset": true, 00:10:27.048 "nvme_admin": false, 00:10:27.048 "nvme_io": false, 00:10:27.048 "nvme_io_md": false, 00:10:27.048 "write_zeroes": true, 00:10:27.048 "zcopy": false, 00:10:27.048 "get_zone_info": false, 00:10:27.048 "zone_management": false, 00:10:27.048 "zone_append": false, 00:10:27.048 "compare": false, 00:10:27.048 "compare_and_write": false, 00:10:27.048 "abort": false, 00:10:27.048 "seek_hole": true, 00:10:27.048 "seek_data": true, 00:10:27.048 "copy": false, 00:10:27.048 "nvme_iov_md": false 00:10:27.048 }, 00:10:27.048 "driver_specific": { 00:10:27.048 "lvol": { 00:10:27.048 "lvol_store_uuid": "e20cf96e-9477-4804-8622-73d18d253449", 00:10:27.048 "base_bdev": "aio_bdev", 00:10:27.048 "thin_provision": false, 00:10:27.048 "num_allocated_clusters": 38, 00:10:27.048 "snapshot": false, 00:10:27.048 "clone": false, 00:10:27.048 "esnap_clone": false 00:10:27.048 } 00:10:27.048 } 00:10:27.048 } 00:10:27.048 ] 00:10:27.048 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:27.309 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:27.309 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:27.309 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:27.309 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e20cf96e-9477-4804-8622-73d18d253449 00:10:27.309 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:27.570 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:27.570 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 174a69a1-f05e-4fad-9ee3-a0341080e046 00:10:27.831 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e20cf96e-9477-4804-8622-73d18d253449 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:28.092 00:10:28.092 real 0m16.734s 00:10:28.092 user 0m16.454s 00:10:28.092 sys 0m1.507s 00:10:28.092 12:23:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:28.092 ************************************ 00:10:28.092 END TEST lvs_grow_clean 00:10:28.092 ************************************ 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.092 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:28.353 ************************************ 00:10:28.353 START TEST lvs_grow_dirty 00:10:28.353 ************************************ 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:28.353 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:28.614 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:28.614 12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:28.614 
12:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:28.874 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:28.874 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:28.874 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 lvol 150 00:10:29.135 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=86ff16dc-9265-4195-8d4e-3823d44980e7 00:10:29.135 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:29.135 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:29.135 [2024-07-25 12:23:02.529425] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:29.135 [2024-07-25 12:23:02.529476] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:29.135 true 00:10:29.135 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:29.135 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:29.395 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:29.395 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:29.656 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86ff16dc-9265-4195-8d4e-3823d44980e7 00:10:29.917 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:30.176 [2024-07-25 12:23:03.375898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.176 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=281057 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 281057 /var/tmp/bdevperf.sock 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 281057 ']' 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:30.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.746 12:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:30.746 [2024-07-25 12:23:03.990305] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:10:30.746 [2024-07-25 12:23:03.990360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281057 ] 00:10:30.746 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.746 [2024-07-25 12:23:04.066859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.746 [2024-07-25 12:23:04.144383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.687 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.687 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:31.687 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:32.257 Nvme0n1 00:10:32.257 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:32.257 [ 00:10:32.257 { 00:10:32.257 "name": "Nvme0n1", 00:10:32.257 "aliases": [ 00:10:32.257 "86ff16dc-9265-4195-8d4e-3823d44980e7" 00:10:32.257 ], 00:10:32.257 "product_name": "NVMe disk", 00:10:32.257 "block_size": 4096, 00:10:32.257 "num_blocks": 38912, 00:10:32.257 "uuid": "86ff16dc-9265-4195-8d4e-3823d44980e7", 00:10:32.257 "assigned_rate_limits": { 00:10:32.257 "rw_ios_per_sec": 0, 00:10:32.257 "rw_mbytes_per_sec": 0, 00:10:32.257 "r_mbytes_per_sec": 0, 00:10:32.257 "w_mbytes_per_sec": 0 00:10:32.257 }, 00:10:32.257 "claimed": false, 00:10:32.257 "zoned": false, 00:10:32.257 "supported_io_types": { 
00:10:32.257 "read": true, 00:10:32.257 "write": true, 00:10:32.257 "unmap": true, 00:10:32.257 "flush": true, 00:10:32.257 "reset": true, 00:10:32.257 "nvme_admin": true, 00:10:32.257 "nvme_io": true, 00:10:32.257 "nvme_io_md": false, 00:10:32.257 "write_zeroes": true, 00:10:32.257 "zcopy": false, 00:10:32.257 "get_zone_info": false, 00:10:32.257 "zone_management": false, 00:10:32.257 "zone_append": false, 00:10:32.257 "compare": true, 00:10:32.257 "compare_and_write": true, 00:10:32.257 "abort": true, 00:10:32.257 "seek_hole": false, 00:10:32.257 "seek_data": false, 00:10:32.257 "copy": true, 00:10:32.257 "nvme_iov_md": false 00:10:32.258 }, 00:10:32.258 "memory_domains": [ 00:10:32.258 { 00:10:32.258 "dma_device_id": "system", 00:10:32.258 "dma_device_type": 1 00:10:32.258 } 00:10:32.258 ], 00:10:32.258 "driver_specific": { 00:10:32.258 "nvme": [ 00:10:32.258 { 00:10:32.258 "trid": { 00:10:32.258 "trtype": "TCP", 00:10:32.258 "adrfam": "IPv4", 00:10:32.258 "traddr": "10.0.0.2", 00:10:32.258 "trsvcid": "4420", 00:10:32.258 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:32.258 }, 00:10:32.258 "ctrlr_data": { 00:10:32.258 "cntlid": 1, 00:10:32.258 "vendor_id": "0x8086", 00:10:32.258 "model_number": "SPDK bdev Controller", 00:10:32.258 "serial_number": "SPDK0", 00:10:32.258 "firmware_revision": "24.09", 00:10:32.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:32.258 "oacs": { 00:10:32.258 "security": 0, 00:10:32.258 "format": 0, 00:10:32.258 "firmware": 0, 00:10:32.258 "ns_manage": 0 00:10:32.258 }, 00:10:32.258 "multi_ctrlr": true, 00:10:32.258 "ana_reporting": false 00:10:32.258 }, 00:10:32.258 "vs": { 00:10:32.258 "nvme_version": "1.3" 00:10:32.258 }, 00:10:32.258 "ns_data": { 00:10:32.258 "id": 1, 00:10:32.258 "can_share": true 00:10:32.258 } 00:10:32.258 } 00:10:32.258 ], 00:10:32.258 "mp_policy": "active_passive" 00:10:32.258 } 00:10:32.258 } 00:10:32.258 ] 00:10:32.258 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=281367 00:10:32.258 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:32.258 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:32.517 Running I/O for 10 seconds... 
00:10:33.459 Latency(us) 00:10:33.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:33.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.459 Nvme0n1 : 1.00 19707.00 76.98 0.00 0.00 0.00 0.00 0.00 00:10:33.459 =================================================================================================================== 00:10:33.459 Total : 19707.00 76.98 0.00 0.00 0.00 0.00 0.00 00:10:33.459 00:10:34.400 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:34.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.400 Nvme0n1 : 2.00 19788.50 77.30 0.00 0.00 0.00 0.00 0.00 00:10:34.400 =================================================================================================================== 00:10:34.400 Total : 19788.50 77.30 0.00 0.00 0.00 0.00 0.00 00:10:34.400 00:10:34.661 true 00:10:34.661 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:34.661 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:34.661 12:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:34.661 12:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:34.661 12:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 281367 00:10:35.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.603 Nvme0n1 : 3.00 19846.33 77.52 0.00 0.00 0.00 0.00 0.00 00:10:35.603 =================================================================================================================== 00:10:35.603 Total : 19846.33 77.52 0.00 0.00 0.00 0.00 0.00 00:10:35.603 00:10:36.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.544 Nvme0n1 : 4.00 19892.00 77.70 0.00 0.00 0.00 0.00 0.00 00:10:36.544 =================================================================================================================== 00:10:36.544 Total : 19892.00 77.70 0.00 0.00 0.00 0.00 0.00 00:10:36.544 00:10:37.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.486 Nvme0n1 : 5.00 19901.00 77.74 0.00 0.00 0.00 0.00 0.00 00:10:37.486 =================================================================================================================== 00:10:37.486 Total : 19901.00 77.74 0.00 0.00 0.00 0.00 0.00 00:10:37.486 00:10:38.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.426 Nvme0n1 : 6.00 19925.17 77.83 0.00 0.00 0.00 0.00 0.00 00:10:38.426 =================================================================================================================== 00:10:38.426 Total : 19925.17 77.83 0.00 0.00 0.00 0.00 0.00 00:10:38.426 00:10:39.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.367 Nvme0n1 : 7.00 19941.57 77.90 0.00 0.00 0.00 0.00 0.00 00:10:39.367 
=================================================================================================================== 00:10:39.367 Total : 19941.57 77.90 0.00 0.00 0.00 0.00 0.00 00:10:39.367 00:10:40.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.750 Nvme0n1 : 8.00 19954.25 77.95 0.00 0.00 0.00 0.00 0.00 00:10:40.750 =================================================================================================================== 00:10:40.750 Total : 19954.25 77.95 0.00 0.00 0.00 0.00 0.00 00:10:40.750 00:10:41.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.691 Nvme0n1 : 9.00 19962.78 77.98 0.00 0.00 0.00 0.00 0.00 00:10:41.691 =================================================================================================================== 00:10:41.692 Total : 19962.78 77.98 0.00 0.00 0.00 0.00 0.00 00:10:41.692 00:10:42.633 00:10:42.633 Latency(us) 00:10:42.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.633 Nvme0n1 : 10.00 19966.75 78.00 0.00 0.00 6404.58 2961.72 13006.38 00:10:42.633 =================================================================================================================== 00:10:42.633 Total : 19966.75 78.00 0.00 0.00 6404.58 2961.72 13006.38 00:10:42.633 0 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 281057 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 281057 ']' 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 281057 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 281057 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 281057' 00:10:42.633 killing process with pid 281057 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 281057 00:10:42.633 Received shutdown signal, test time was about 10.000000 seconds 00:10:42.633 00:10:42.633 Latency(us) 00:10:42.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.633 =================================================================================================================== 00:10:42.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 281057 00:10:42.633 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t 
tcp -a 10.0.0.2 -s 4420 00:10:42.894 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:43.154 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:43.154 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 277583 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 277583 00:10:43.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 277583 Killed "${NVMF_APP[@]}" "$@" 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=283215 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 283215 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 283215 ']' 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.724 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:43.724 [2024-07-25 12:23:17.007663] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:10:43.724 [2024-07-25 12:23:17.007718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.724 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.724 [2024-07-25 12:23:17.095608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.984 [2024-07-25 12:23:17.158086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.984 [2024-07-25 12:23:17.158119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.984 [2024-07-25 12:23:17.158126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.984 [2024-07-25 12:23:17.158132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.984 [2024-07-25 12:23:17.158137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.984 [2024-07-25 12:23:17.158153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.554 12:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.554 12:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:44.554 12:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.555 12:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:44.555 12:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.555 12:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.555 12:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:44.815 [2024-07-25 12:23:18.061089] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:44.815 [2024-07-25 12:23:18.061172] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:44.815 [2024-07-25 12:23:18.061199] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:44.815 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:44.815 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 86ff16dc-9265-4195-8d4e-3823d44980e7 00:10:44.815 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=86ff16dc-9265-4195-8d4e-3823d44980e7 00:10:44.815 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:44.815 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:44.815 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:44.815 12:23:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:44.815 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:45.075 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 86ff16dc-9265-4195-8d4e-3823d44980e7 -t 2000 00:10:45.075 [ 00:10:45.075 { 00:10:45.075 "name": "86ff16dc-9265-4195-8d4e-3823d44980e7", 00:10:45.075 "aliases": [ 00:10:45.075 "lvs/lvol" 00:10:45.075 ], 00:10:45.075 "product_name": "Logical Volume", 00:10:45.075 "block_size": 4096, 00:10:45.075 "num_blocks": 38912, 00:10:45.075 "uuid": "86ff16dc-9265-4195-8d4e-3823d44980e7", 00:10:45.075 "assigned_rate_limits": { 00:10:45.075 "rw_ios_per_sec": 0, 00:10:45.075 "rw_mbytes_per_sec": 0, 00:10:45.075 "r_mbytes_per_sec": 0, 00:10:45.075 "w_mbytes_per_sec": 0 00:10:45.075 }, 00:10:45.075 "claimed": false, 00:10:45.075 "zoned": false, 00:10:45.075 "supported_io_types": { 00:10:45.075 "read": true, 00:10:45.075 "write": true, 00:10:45.075 "unmap": true, 00:10:45.075 "flush": false, 00:10:45.075 "reset": true, 00:10:45.075 "nvme_admin": false, 00:10:45.075 "nvme_io": false, 00:10:45.075 "nvme_io_md": false, 00:10:45.075 "write_zeroes": true, 00:10:45.075 "zcopy": false, 00:10:45.075 "get_zone_info": false, 00:10:45.075 "zone_management": false, 00:10:45.075 "zone_append": false, 00:10:45.075 "compare": false, 00:10:45.075 "compare_and_write": false, 00:10:45.075 "abort": false, 00:10:45.075 "seek_hole": true, 00:10:45.075 "seek_data": true, 00:10:45.075 "copy": false, 00:10:45.075 "nvme_iov_md": false 00:10:45.075 }, 00:10:45.075 "driver_specific": { 00:10:45.075 "lvol": { 00:10:45.075 "lvol_store_uuid": "f4430817-ce6f-4bf2-a9ae-827a81be2fc9", 00:10:45.075 "base_bdev": "aio_bdev", 00:10:45.075 "thin_provision": false, 00:10:45.075 "num_allocated_clusters": 38, 00:10:45.075 "snapshot": false, 00:10:45.075 "clone": false, 00:10:45.075 "esnap_clone": false 00:10:45.075 } 00:10:45.075 } 00:10:45.075 } 00:10:45.075 ] 00:10:45.075 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:45.075 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:45.075 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:45.336 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:45.336 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:45.336 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:45.596 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:45.596 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:46.231 [2024-07-25 12:23:19.418727] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:46.231 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:46.492 request: 00:10:46.492 { 00:10:46.492 "uuid": "f4430817-ce6f-4bf2-a9ae-827a81be2fc9", 00:10:46.492 "method": "bdev_lvol_get_lvstores", 00:10:46.492 "req_id": 1 00:10:46.492 } 00:10:46.492 Got JSON-RPC error response 00:10:46.492 response: 00:10:46.492 { 00:10:46.492 "code": -19, 00:10:46.492 "message": "No such device" 00:10:46.492 } 00:10:46.492 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:46.492 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:46.492 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:46.492 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:46.492 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:46.751 aio_bdev 00:10:46.751 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 86ff16dc-9265-4195-8d4e-3823d44980e7 00:10:46.751 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=86ff16dc-9265-4195-8d4e-3823d44980e7 00:10:46.751 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:46.751 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:46.751 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:46.751 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:46.751 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:47.010 12:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 86ff16dc-9265-4195-8d4e-3823d44980e7 -t 2000 00:10:47.010 [ 00:10:47.010 { 00:10:47.010 "name": "86ff16dc-9265-4195-8d4e-3823d44980e7", 00:10:47.010 "aliases": [ 00:10:47.010 "lvs/lvol" 00:10:47.010 ], 00:10:47.010 "product_name": "Logical Volume", 00:10:47.010 "block_size": 4096, 00:10:47.010 "num_blocks": 38912, 00:10:47.010 "uuid": "86ff16dc-9265-4195-8d4e-3823d44980e7", 00:10:47.010 "assigned_rate_limits": { 00:10:47.010 "rw_ios_per_sec": 0, 00:10:47.010 "rw_mbytes_per_sec": 0, 00:10:47.010 "r_mbytes_per_sec": 0, 00:10:47.010 "w_mbytes_per_sec": 0 00:10:47.010 }, 00:10:47.010 "claimed": false, 00:10:47.010 "zoned": false, 00:10:47.010 "supported_io_types": { 00:10:47.010 "read": true, 00:10:47.010 "write": true, 00:10:47.010 "unmap": true, 00:10:47.010 "flush": false, 00:10:47.010 "reset": true, 00:10:47.010 "nvme_admin": false, 00:10:47.010 "nvme_io": false, 00:10:47.010 "nvme_io_md": false, 00:10:47.010 "write_zeroes": true, 00:10:47.010 "zcopy": false, 00:10:47.010 "get_zone_info": false, 00:10:47.010 "zone_management": false, 00:10:47.010 "zone_append": false, 00:10:47.010 "compare": false, 00:10:47.010 "compare_and_write": false, 00:10:47.010 "abort": false, 00:10:47.010 "seek_hole": true, 00:10:47.010 "seek_data": true, 00:10:47.010 "copy": false, 00:10:47.010 "nvme_iov_md": false 00:10:47.010 }, 00:10:47.010 "driver_specific": { 00:10:47.010 "lvol": { 00:10:47.010 "lvol_store_uuid": "f4430817-ce6f-4bf2-a9ae-827a81be2fc9", 00:10:47.010 "base_bdev": "aio_bdev", 00:10:47.010 "thin_provision": false, 00:10:47.010 "num_allocated_clusters": 38, 00:10:47.010 "snapshot": false, 00:10:47.010 "clone": false, 00:10:47.010 "esnap_clone": false 00:10:47.011 } 00:10:47.011 } 00:10:47.011 } 00:10:47.011 ] 00:10:47.011 12:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:47.011 12:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:47.011 12:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:47.270 12:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:47.270 12:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:47.270 12:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:47.839 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:47.839 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86ff16dc-9265-4195-8d4e-3823d44980e7 00:10:48.099 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4430817-ce6f-4bf2-a9ae-827a81be2fc9 00:10:48.358 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:48.358 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:48.618 00:10:48.618 real 0m20.257s 00:10:48.618 user 0m51.365s 00:10:48.618 sys 0m3.121s 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:48.618 ************************************ 00:10:48.618 END TEST lvs_grow_dirty 00:10:48.618 ************************************ 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:48.618 nvmf_trace.0 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:48.618 12:23:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:48.618 rmmod nvme_tcp 00:10:48.618 rmmod nvme_fabrics 00:10:48.618 rmmod nvme_keyring 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 283215 ']' 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 283215 00:10:48.618 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 283215 ']' 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 283215 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 283215 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 283215' 00:10:48.619 killing process with pid 283215 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 283215 00:10:48.619 12:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 283215 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.879 12:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.788 12:23:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:50.788 00:10:50.788 real 0m49.340s 00:10:50.788 user 1m15.663s 00:10:50.788 sys 0m11.511s 00:10:50.788 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.788 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:50.788 ************************************ 00:10:50.788 END TEST nvmf_lvs_grow 00:10:50.788 ************************************ 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.049 ************************************ 00:10:51.049 START TEST nvmf_bdev_io_wait 00:10:51.049 ************************************ 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:51.049 * Looking for test storage... 00:10:51.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.049 
12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.049 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:51.050 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.221 12:23:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:59.221 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:59.221 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:59.221 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.221 12:23:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.221 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:59.222 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.222 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:59.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:10:59.482 00:10:59.482 --- 10.0.0.2 ping statistics --- 00:10:59.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.482 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:10:59.482 00:10:59.482 --- 10.0.0.1 ping statistics --- 00:10:59.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.482 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:59.482 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=288683 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 288683 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 288683 ']' 00:10:59.742 12:23:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.742 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.742 [2024-07-25 12:23:32.966829] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:10:59.742 [2024-07-25 12:23:32.966888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.742 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.742 [2024-07-25 12:23:33.059313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.742 [2024-07-25 12:23:33.153383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.742 [2024-07-25 12:23:33.153440] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.742 [2024-07-25 12:23:33.153448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.742 [2024-07-25 12:23:33.153454] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.742 [2024-07-25 12:23:33.153460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:59.742 [2024-07-25 12:23:33.153590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.742 [2024-07-25 12:23:33.153718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.742 [2024-07-25 12:23:33.153847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.742 [2024-07-25 12:23:33.153849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 [2024-07-25 12:23:33.932024] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 Malloc0 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 
00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.684 [2024-07-25 12:23:34.009162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=288736 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=288738 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:00.684 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:00.684 { 00:11:00.684 "params": { 00:11:00.684 "name": "Nvme$subsystem", 00:11:00.684 "trtype": "$TEST_TRANSPORT", 00:11:00.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.684 "adrfam": "ipv4", 00:11:00.684 "trsvcid": "$NVMF_PORT", 00:11:00.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.684 "hdgst": ${hdgst:-false}, 00:11:00.684 "ddgst": ${ddgst:-false} 00:11:00.684 }, 00:11:00.684 "method": "bdev_nvme_attach_controller" 00:11:00.684 } 00:11:00.684 EOF 00:11:00.684 )") 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=288740 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:00.685 12:23:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:00.685 { 00:11:00.685 "params": { 00:11:00.685 "name": "Nvme$subsystem", 00:11:00.685 "trtype": "$TEST_TRANSPORT", 00:11:00.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.685 "adrfam": "ipv4", 00:11:00.685 "trsvcid": "$NVMF_PORT", 00:11:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.685 "hdgst": ${hdgst:-false}, 00:11:00.685 "ddgst": ${ddgst:-false} 00:11:00.685 }, 00:11:00.685 "method": "bdev_nvme_attach_controller" 00:11:00.685 } 00:11:00.685 EOF 00:11:00.685 )") 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=288743 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:00.685 { 00:11:00.685 "params": { 00:11:00.685 "name": "Nvme$subsystem", 00:11:00.685 "trtype": "$TEST_TRANSPORT", 00:11:00.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.685 "adrfam": "ipv4", 00:11:00.685 "trsvcid": "$NVMF_PORT", 00:11:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.685 "hdgst": ${hdgst:-false}, 00:11:00.685 "ddgst": ${ddgst:-false} 00:11:00.685 }, 00:11:00.685 "method": "bdev_nvme_attach_controller" 00:11:00.685 } 00:11:00.685 EOF 00:11:00.685 )") 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:00.685 { 00:11:00.685 "params": { 00:11:00.685 "name": "Nvme$subsystem", 00:11:00.685 "trtype": "$TEST_TRANSPORT", 00:11:00.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.685 "adrfam": "ipv4", 00:11:00.685 "trsvcid": "$NVMF_PORT", 00:11:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.685 "hdgst": ${hdgst:-false}, 00:11:00.685 "ddgst": ${ddgst:-false} 00:11:00.685 }, 00:11:00.685 "method": "bdev_nvme_attach_controller" 00:11:00.685 } 00:11:00.685 EOF 00:11:00.685 )") 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 288736 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:00.685 "params": { 00:11:00.685 "name": "Nvme1", 00:11:00.685 "trtype": "tcp", 00:11:00.685 "traddr": "10.0.0.2", 00:11:00.685 "adrfam": "ipv4", 00:11:00.685 "trsvcid": "4420", 00:11:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.685 "hdgst": false, 00:11:00.685 "ddgst": false 00:11:00.685 }, 00:11:00.685 "method": "bdev_nvme_attach_controller" 00:11:00.685 }' 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:00.685 "params": { 00:11:00.685 "name": "Nvme1", 00:11:00.685 "trtype": "tcp", 00:11:00.685 "traddr": "10.0.0.2", 00:11:00.685 "adrfam": "ipv4", 00:11:00.685 "trsvcid": "4420", 00:11:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.685 "hdgst": false, 00:11:00.685 "ddgst": false 00:11:00.685 }, 00:11:00.685 "method": "bdev_nvme_attach_controller" 00:11:00.685 }' 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:00.685 "params": { 00:11:00.685 "name": "Nvme1", 00:11:00.685 "trtype": "tcp", 00:11:00.685 "traddr": "10.0.0.2", 00:11:00.685 "adrfam": "ipv4", 00:11:00.685 "trsvcid": "4420", 00:11:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.685 "hdgst": false, 00:11:00.685 "ddgst": false 00:11:00.685 }, 00:11:00.685 "method": "bdev_nvme_attach_controller" 00:11:00.685 }' 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:00.685 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:00.685 "params": { 00:11:00.685 "name": "Nvme1", 00:11:00.685 "trtype": "tcp", 00:11:00.685 "traddr": "10.0.0.2", 00:11:00.685 "adrfam": "ipv4", 00:11:00.685 "trsvcid": "4420", 00:11:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.685 "hdgst": false, 00:11:00.685 "ddgst": false 00:11:00.685 }, 00:11:00.685 "method": "bdev_nvme_attach_controller" 00:11:00.685 }' 00:11:00.685 [2024-07-25 12:23:34.063368] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:11:00.685 [2024-07-25 12:23:34.063436] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:00.685 [2024-07-25 12:23:34.065141] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:11:00.685 [2024-07-25 12:23:34.065201] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:00.685 [2024-07-25 12:23:34.087782] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:11:00.685 [2024-07-25 12:23:34.087851] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:00.946 [2024-07-25 12:23:34.106465] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:11:00.946 [2024-07-25 12:23:34.106607] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:00.946 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.946 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.946 [2024-07-25 12:23:34.249835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.946 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.946 [2024-07-25 12:23:34.319469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:01.207 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.207 [2024-07-25 12:23:34.390889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.207 [2024-07-25 12:23:34.437151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.207 [2024-07-25 12:23:34.508579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:01.207 [2024-07-25 12:23:34.508619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.207 [2024-07-25 12:23:34.527615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:01.207 [2024-07-25 12:23:34.572344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:01.468 Running I/O for 1 seconds... 00:11:01.468 Running I/O for 1 seconds... 00:11:01.468 Running I/O for 1 seconds... 00:11:01.468 Running I/O for 1 seconds... 00:11:02.410 00:11:02.410 Latency(us) 00:11:02.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.410 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:02.410 Nvme1n1 : 1.00 203001.88 792.98 0.00 0.00 628.15 256.79 746.73 00:11:02.410 =================================================================================================================== 00:11:02.410 Total : 203001.88 792.98 0.00 0.00 628.15 256.79 746.73 00:11:02.410 00:11:02.410 Latency(us) 00:11:02.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.410 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:02.410 Nvme1n1 : 1.02 7085.34 27.68 0.00 0.00 17900.29 7410.61 29440.79 00:11:02.410 =================================================================================================================== 00:11:02.410 Total : 7085.34 27.68 0.00 0.00 17900.29 7410.61 29440.79 00:11:02.410 00:11:02.410 Latency(us) 00:11:02.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.410 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:02.410 Nvme1n1 : 1.01 4790.45 18.71 0.00 0.00 26529.45 11292.36 46177.67 00:11:02.410 =================================================================================================================== 00:11:02.410 Total : 4790.45 18.71 0.00 0.00 26529.45 11292.36 46177.67 00:11:02.410 00:11:02.411 Latency(us) 00:11:02.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.411 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:02.411 Nvme1n1 : 1.01 7249.07 28.32 0.00 0.00 17600.05 4915.20 35086.97 00:11:02.411 =================================================================================================================== 00:11:02.411 Total : 7249.07 28.32 0.00 0.00 17600.05 4915.20 35086.97 00:11:02.982 12:23:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 288738 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 288740 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 288743 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.982 rmmod nvme_tcp 00:11:02.982 rmmod nvme_fabrics 00:11:02.982 rmmod nvme_keyring 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 288683 ']' 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 288683 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 288683 ']' 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 288683 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 288683 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 288683' 00:11:02.982 killing process with pid 288683 00:11:02.982 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 288683 00:11:02.982 12:23:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 288683 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.243 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.156 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:05.416 00:11:05.416 real 0m14.299s 00:11:05.416 user 0m21.533s 00:11:05.416 sys 0m7.975s 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:05.416 ************************************ 00:11:05.416 END TEST nvmf_bdev_io_wait 00:11:05.416 ************************************ 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.416 ************************************ 00:11:05.416 START TEST nvmf_queue_depth 00:11:05.416 ************************************ 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:05.416 * Looking for test storage... 
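The four latency tables above come from four bdevperf instances that bdev_io_wait.sh starts in the background, one workload per core mask (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80, each at queue depth 128 and 4096-byte I/O), and then reaps via the wait calls traced at bdev_io_wait.sh@38-@40. A minimal sketch of that pattern, with the JSON/target configuration the real script feeds bdevperf omitted:

    pids=()
    for spec in '0x10 write' '0x20 read' '0x40 flush' '0x80 unmap'; do
        set -- $spec                                     # $1 = core mask, $2 = workload (values from the trace)
        ./build/examples/bdevperf -m "$1" -q 128 -o 4096 -w "$2" -t 1 &   # one short run per workload against Nvme1n1
        pids+=($!)
    done
    wait "${pids[@]}"                                    # the 'wait 2887xx' lines traced above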
00:11:05.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:05.416 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.417 12:23:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:05.417 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.552 12:23:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:13.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:13.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:13.552 12:23:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:13.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:13.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.552 
12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.552 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:13.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.741 ms 00:11:13.813 00:11:13.813 --- 10.0.0.2 ping statistics --- 00:11:13.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.813 rtt min/avg/max/mdev = 0.741/0.741/0.741/0.000 ms 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:11:13.813 00:11:13.813 --- 10.0.0.1 ping statistics --- 00:11:13.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.813 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=293558 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 293558 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 293558 ']' 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.813 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:14.074 [2024-07-25 12:23:47.274054] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
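Condensing the nvmf_tcp_init steps traced above: the first ice port (cvl_0_0) is moved into its own network namespace and becomes the target NIC at 10.0.0.2, while the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, so NVMe/TCP traffic crosses a real link even on a single host. The equivalent by hand, with the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk                           # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # port 0 becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator keeps port 1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmfappstart then launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waits for it to start listening on /var/tmp/spdk.sock before the test proceeds.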
00:11:14.074 [2024-07-25 12:23:47.274117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.074 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.074 [2024-07-25 12:23:47.365985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.074 [2024-07-25 12:23:47.473953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.074 [2024-07-25 12:23:47.474021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.074 [2024-07-25 12:23:47.474037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.074 [2024-07-25 12:23:47.474049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.074 [2024-07-25 12:23:47.474060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.074 [2024-07-25 12:23:47.474101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 [2024-07-25 12:23:48.206461] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 Malloc0 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 [2024-07-25 12:23:48.277840] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=293872 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 293872 /var/tmp/bdevperf.sock 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 293872 ']' 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:15.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.017 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 [2024-07-25 12:23:48.341476] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
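Stripped of the xtrace noise, the queue-depth test provisions the target with five RPC calls and then drives it from a second process, bdevperf, at queue depth 1024 (rpc_cmd in the trace resolves to scripts/rpc.py against the running target):

    # target side, default RPC socket /var/tmp/spdk.sock
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf starts idle (-z) on its own RPC socket, gets a controller
    # attached, then perform_tests kicks off the 10 s verify run (traced just below)
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests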
00:11:15.017 [2024-07-25 12:23:48.341541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293872 ] 00:11:15.017 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.017 [2024-07-25 12:23:48.427649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.278 [2024-07-25 12:23:48.519959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.850 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.850 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:15.850 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:15.850 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.850 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.110 NVMe0n1 00:11:16.110 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.110 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:16.110 Running I/O for 10 seconds... 00:11:26.192 00:11:26.192 Latency(us) 00:11:26.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.192 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:26.192 Verification LBA range: start 0x0 length 0x4000 00:11:26.192 NVMe0n1 : 10.12 7047.92 27.53 0.00 0.00 144531.30 23592.96 87515.77 00:11:26.192 =================================================================================================================== 00:11:26.192 Total : 7047.92 27.53 0.00 0.00 144531.30 23592.96 87515.77 00:11:26.192 0 00:11:26.192 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 293872 00:11:26.192 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 293872 ']' 00:11:26.192 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 293872 00:11:26.192 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:26.192 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.192 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 293872 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 293872' 00:11:26.453 killing process with pid 293872 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 293872 00:11:26.453 Received shutdown signal, 
test time was about 10.000000 seconds 00:11:26.453 00:11:26.453 Latency(us) 00:11:26.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.453 =================================================================================================================== 00:11:26.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 293872 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.453 rmmod nvme_tcp 00:11:26.453 rmmod nvme_fabrics 00:11:26.453 rmmod nvme_keyring 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 293558 ']' 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 293558 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 293558 ']' 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 293558 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 293558 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 293558' 00:11:26.453 killing process with pid 293558 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 293558 00:11:26.453 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 293558 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.714 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.256 00:11:29.256 real 0m23.489s 00:11:29.256 user 0m26.455s 00:11:29.256 sys 0m7.490s 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.256 ************************************ 00:11:29.256 END TEST nvmf_queue_depth 00:11:29.256 ************************************ 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.256 ************************************ 00:11:29.256 START TEST nvmf_target_multipath 00:11:29.256 ************************************ 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:29.256 * Looking for test storage... 
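Two things worth noting from the run that just ended. First, the headline numbers are self-consistent: with 1024 requests in flight at ~7048 IOPS, Little's law gives 1024 / 7048 s ≈ 145 ms expected average latency, matching the ~144.5 ms reported above. Second, both the bdevperf process and the target are stopped through the same killprocess helper whose probes are visible in the trace (kill -0, ps --no-headers -o comm=, the 'killing process with pid ...' message, kill, wait); a sketch of that helper:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                          # still alive?
        if [[ $(uname) == Linux ]]; then
            local name; name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1                 # refuse to signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                     # reap it; ignore 'not a child' for detached pids
    }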
00:11:29.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.256 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.395 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:37.396 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:37.396 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:37.396 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.396 12:24:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:37.396 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.396 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:37.657 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:11:37.657 00:11:37.657 --- 10.0.0.2 ping statistics --- 00:11:37.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.657 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:11:37.657 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:37.657 00:11:37.657 --- 10.0.0.1 ping statistics --- 00:11:37.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.657 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:37.657 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:37.658 only one NIC for nvmf test 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:37.658 rmmod nvme_tcp 00:11:37.658 rmmod nvme_fabrics 00:11:37.658 rmmod nvme_keyring 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.658 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.571 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.832 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.832 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:39.832 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:39.832 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:39.832 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.832 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.832 00:11:39.832 real 0m10.786s 
00:11:39.832 user 0m2.413s 00:11:39.832 sys 0m6.293s 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.832 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:39.832 ************************************ 00:11:39.832 END TEST nvmf_target_multipath 00:11:39.832 ************************************ 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:39.833 ************************************ 00:11:39.833 START TEST nvmf_zcopy 00:11:39.833 ************************************ 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:39.833 * Looking for test storage... 00:11:39.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:39.833 12:24:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.833 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:11:49.838 12:24:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:49.838 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:49.838 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.838 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:49.839 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:49.839 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.839 12:24:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:11:49.839 00:11:49.839 --- 10.0.0.2 ping statistics --- 00:11:49.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.839 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:11:49.839 00:11:49.839 --- 10.0.0.1 ping statistics --- 00:11:49.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.839 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=305253 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 305253 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 305253 ']' 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.839 12:24:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 [2024-07-25 12:24:21.874641] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
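The nvmf_tcp_init block traced above is what turns the two detected E810 ports into a self-contained NVMe/TCP test topology: cvl_0_0 is moved into a private network namespace where the target will run with address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, port 4420 is opened in the firewall, and both directions are ping-verified. Consolidated into a stand-alone sketch (same interface names and addresses as in the trace; a readability aid, not a verbatim excerpt of the log):

    ip netns add cvl_0_0_ns_spdk                         # namespace that will host the SPDK target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP connections
    ping -c 1 10.0.0.2                                   # root namespace -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator port
    modprobe nvme-tcp                                    # kernel NVMe/TCP initiator for later tests

The target itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and the script waits for it to listen on /var/tmp/spdk.sock before provisioning it, which is where the SPDK/DPDK startup banner around this point comes from.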
00:11:49.839 [2024-07-25 12:24:21.874701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.839 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.839 [2024-07-25 12:24:21.963186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.839 [2024-07-25 12:24:22.067134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.839 [2024-07-25 12:24:22.067197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.839 [2024-07-25 12:24:22.067213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.839 [2024-07-25 12:24:22.067225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.839 [2024-07-25 12:24:22.067235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.839 [2024-07-25 12:24:22.067282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 [2024-07-25 12:24:22.805500] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 [2024-07-25 12:24:22.829726] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.839 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.840 malloc0 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:49.840 { 00:11:49.840 "params": { 00:11:49.840 "name": "Nvme$subsystem", 00:11:49.840 "trtype": "$TEST_TRANSPORT", 00:11:49.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:49.840 "adrfam": "ipv4", 00:11:49.840 "trsvcid": "$NVMF_PORT", 00:11:49.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:49.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:49.840 "hdgst": ${hdgst:-false}, 00:11:49.840 "ddgst": ${ddgst:-false} 00:11:49.840 }, 00:11:49.840 "method": "bdev_nvme_attach_controller" 00:11:49.840 } 00:11:49.840 EOF 00:11:49.840 )") 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
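Before bdevperf is launched, the trace above provisions the freshly started target over its RPC socket: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 plus the discovery listener, and a 32 MB malloc bdev attached as namespace 1. rpc_cmd is the test suite's thin wrapper around scripts/rpc.py, so the same provisioning can be written out explicitly; a sketch under that assumption:

    # Target-side setup equivalent to the rpc_cmd calls traced above (illustrative,
    # issued against the namespaced target's default RPC socket /var/tmp/spdk.sock).
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0              # 32 MB bdev, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The gen_nvmf_target_json heredoc at the end of the trace above builds the matching host-side configuration for bdevperf (rendered just below): a single bdev_nvme_attach_controller entry named Nvme1 pointing at that subsystem on 10.0.0.2:4420, handed to bdevperf over an anonymous pipe as --json /dev/fd/62 for the 10-second, queue-depth-128 verify run at 8 KiB I/O size.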
00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:49.840 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:49.840 "params": { 00:11:49.840 "name": "Nvme1", 00:11:49.840 "trtype": "tcp", 00:11:49.840 "traddr": "10.0.0.2", 00:11:49.840 "adrfam": "ipv4", 00:11:49.840 "trsvcid": "4420", 00:11:49.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:49.840 "hdgst": false, 00:11:49.840 "ddgst": false 00:11:49.840 }, 00:11:49.840 "method": "bdev_nvme_attach_controller" 00:11:49.840 }' 00:11:49.840 [2024-07-25 12:24:22.948650] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:11:49.840 [2024-07-25 12:24:22.948716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305303 ] 00:11:49.840 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.840 [2024-07-25 12:24:23.033667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.840 [2024-07-25 12:24:23.130086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.100 Running I/O for 10 seconds... 00:12:00.096 00:12:00.096 Latency(us) 00:12:00.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.096 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:00.096 Verification LBA range: start 0x0 length 0x1000 00:12:00.096 Nvme1n1 : 10.01 4863.80 38.00 0.00 0.00 26253.52 4537.11 34078.72 00:12:00.096 =================================================================================================================== 00:12:00.096 Total : 4863.80 38.00 0.00 0.00 26253.52 4537.11 34078.72 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=307104 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:00.358 { 00:12:00.358 "params": { 00:12:00.358 "name": "Nvme$subsystem", 00:12:00.358 "trtype": "$TEST_TRANSPORT", 00:12:00.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:00.358 "adrfam": "ipv4", 00:12:00.358 "trsvcid": "$NVMF_PORT", 00:12:00.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:00.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:00.358 "hdgst": ${hdgst:-false}, 00:12:00.358 "ddgst": ${ddgst:-false} 00:12:00.358 }, 00:12:00.358 "method": "bdev_nvme_attach_controller" 00:12:00.358 } 00:12:00.358 EOF 00:12:00.358 )") 00:12:00.358 [2024-07-25 
12:24:33.582597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.358 [2024-07-25 12:24:33.582637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:00.358 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:00.358 "params": { 00:12:00.358 "name": "Nvme1", 00:12:00.358 "trtype": "tcp", 00:12:00.358 "traddr": "10.0.0.2", 00:12:00.358 "adrfam": "ipv4", 00:12:00.358 "trsvcid": "4420", 00:12:00.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:00.358 "hdgst": false, 00:12:00.358 "ddgst": false 00:12:00.358 }, 00:12:00.358 "method": "bdev_nvme_attach_controller" 00:12:00.358 }' 00:12:00.358 [2024-07-25 12:24:33.594605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.358 [2024-07-25 12:24:33.594628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.358 [2024-07-25 12:24:33.606626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.358 [2024-07-25 12:24:33.606642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.358 [2024-07-25 12:24:33.618652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.358 [2024-07-25 12:24:33.618668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.358 [2024-07-25 12:24:33.624314] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
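From this point the log is dominated by repeating pairs of "Requested NSID 1 already in use" / "Unable to add namespace" messages while a second bdevperf instance (file-prefix spdk_pid307104, a 5-second randrw run, 50% read mix, 8 KiB I/O, queue depth 128) starts up. Each pair is the target rejecting another nvmf_subsystem_add_ns request for NSID 1, which the subsystem already has: the RPC path pauses the subsystem, the add fails in nvmf_rpc_ns_paused, and the error is reported, after which the run continues, so these appear to be expected negative-path output of the zcopy test's add-namespace loop rather than a failure of the run. One iteration would look like the sketch below; xtrace is disabled for this part of the test, so the exact retried command is not shown in the log and this is only an illustration based on the earlier successful add.

    # One add_ns retry of the kind that produces each error pair above (illustrative).
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # target log: "Requested NSID 1 already in use" followed by "Unable to add namespace"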
00:12:00.358 [2024-07-25 12:24:33.624366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307104 ] 00:12:00.358 [2024-07-25 12:24:33.630696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.358 [2024-07-25 12:24:33.630725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.358 [2024-07-25 12:24:33.642719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.358 [2024-07-25 12:24:33.642735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.358 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.359 [2024-07-25 12:24:33.654749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.654765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.666783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.666799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.678815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.678831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.690848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.690864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.702879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.702894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.704069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.359 [2024-07-25 12:24:33.714915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.714933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.726950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.726968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.738990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.739010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.751020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.751038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.763053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.763069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.359 [2024-07-25 12:24:33.770902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.359 [2024-07-25 12:24:33.775090] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.359 [2024-07-25 12:24:33.775107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.787125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.787146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.799160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.799180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.811190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.811206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.823224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.823240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.835257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.835273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.847312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.847337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.859358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.859377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.871381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.871401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.883414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.883430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.895449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.895464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.907484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.907501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.919522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.919542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.931590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.931612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.944410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.944435] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 Running I/O for 5 seconds... 00:12:00.619 [2024-07-25 12:24:33.955654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.955675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.975016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.975042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:33.991348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:33.991373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:34.010426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:34.010452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.619 [2024-07-25 12:24:34.029722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.619 [2024-07-25 12:24:34.029748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.047206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.047232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.064001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.064026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.082473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.082498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.101024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.101048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.118933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.118957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.136661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.136685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.154892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.154917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.172605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.172630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.190304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.190329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.209246] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.209271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.228322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.228347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.245648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.245674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.263504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.263529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.879 [2024-07-25 12:24:34.281153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.879 [2024-07-25 12:24:34.281178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.299178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.299203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.317117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.317141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.334604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.334630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.351731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.351756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.369600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.369626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.387675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.387700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.405452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.405476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.424232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.424257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.441120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.441145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.458712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.458744] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.476090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.476114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.493703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.493728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.511329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.511354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.529351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.529376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.139 [2024-07-25 12:24:34.547341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.139 [2024-07-25 12:24:34.547366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.566191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.566215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.584105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.584130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.602066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.602092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.619764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.619789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.637430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.637456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.655181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.655206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.672386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.672410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.689418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.689443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.706999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.707024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.724558] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.724582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.742592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.742618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.760066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.760091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.777803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.777828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.795703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.795736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.399 [2024-07-25 12:24:34.814207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.399 [2024-07-25 12:24:34.814232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.833246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.833271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.851187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.851212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.869078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.869103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.886834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.886860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.904584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.904609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.922352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.922376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.940968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.940993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.958530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.958561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.659 [2024-07-25 12:24:34.976440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.659 [2024-07-25 12:24:34.976465] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.660 [2024-07-25 12:24:34.995184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.660 [2024-07-25 12:24:34.995209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.660 [2024-07-25 12:24:35.013492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.660 [2024-07-25 12:24:35.013518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.660 [2024-07-25 12:24:35.031087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.660 [2024-07-25 12:24:35.031112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.660 [2024-07-25 12:24:35.050039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.660 [2024-07-25 12:24:35.050063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.660 [2024-07-25 12:24:35.067329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.660 [2024-07-25 12:24:35.067353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.085025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.085050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.103076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.103100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.122022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.122047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.139719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.139749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.157233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.157258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.174859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.174884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.192778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.192802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.211482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.211506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.229820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.229845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.247972] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.247996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.265689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.265713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.283756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.283781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.301495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.301519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.920 [2024-07-25 12:24:35.319000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.920 [2024-07-25 12:24:35.319024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.921 [2024-07-25 12:24:35.337023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.921 [2024-07-25 12:24:35.337048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.355333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.355358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.373195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.373220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.391155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.391180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.409361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.409386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.427029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.427053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.444812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.444836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.462442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.462466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.481311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.481341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.498233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.498257] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.516790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.516815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.534007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.534031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.552051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.552075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.569508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.569532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.181 [2024-07-25 12:24:35.587268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.181 [2024-07-25 12:24:35.587294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.604962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.604988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.623503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.623528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.641608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.641633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.659495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.659519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.677056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.677081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.695968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.695993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.713645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.713670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.731370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.731395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.748692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.748718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.767310] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.767335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.785301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.785325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.802684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.802709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.820487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.820511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.838462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.838486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.441 [2024-07-25 12:24:35.856515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.441 [2024-07-25 12:24:35.856540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.701 [2024-07-25 12:24:35.874356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.701 [2024-07-25 12:24:35.874381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.701 [2024-07-25 12:24:35.892135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.701 [2024-07-25 12:24:35.892160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.701 [2024-07-25 12:24:35.909778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.701 [2024-07-25 12:24:35.909802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.701 [2024-07-25 12:24:35.928282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.701 [2024-07-25 12:24:35.928306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.701 [2024-07-25 12:24:35.946020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.701 [2024-07-25 12:24:35.946045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.701 [2024-07-25 12:24:35.963793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.701 [2024-07-25 12:24:35.963818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.701 [2024-07-25 12:24:35.982119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.701 [2024-07-25 12:24:35.982144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.702 [2024-07-25 12:24:36.001206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.702 [2024-07-25 12:24:36.001232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.702 [2024-07-25 12:24:36.018377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.702 [2024-07-25 12:24:36.018402] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.702 [2024-07-25 12:24:36.036032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.702 [2024-07-25 12:24:36.036057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.702 [2024-07-25 12:24:36.054007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.702 [2024-07-25 12:24:36.054032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.702 [2024-07-25 12:24:36.072968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.702 [2024-07-25 12:24:36.072993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.702 [2024-07-25 12:24:36.090828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.702 [2024-07-25 12:24:36.090853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.702 [2024-07-25 12:24:36.107500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.702 [2024-07-25 12:24:36.107525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.124973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.124998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.142952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.142977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.160655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.160680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.177642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.177667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.196181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.196206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.213851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.213875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.231913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.231937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.249501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.249526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.268021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.268046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.286104] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.286130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.304384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.304409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.322031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.322056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.340866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.340890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.358829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.358853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.962 [2024-07-25 12:24:36.377012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.962 [2024-07-25 12:24:36.377037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.395679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.395705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.413829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.413854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.431828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.431852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.448677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.448702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.466007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.466032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.483503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.483528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.501623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.501647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.519456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.519481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.538325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.538350] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.555082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.555107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.566577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.566601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.582657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.582682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.600325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.600350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.618056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.618081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.636099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.636123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.285 [2024-07-25 12:24:36.653956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.285 [2024-07-25 12:24:36.653980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.671523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.671556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.689482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.689507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.707538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.707569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.725151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.725176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.743114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.743138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.761989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.762013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.780182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.780206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.797811] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.797836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.815508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.815532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.834272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.834297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.851621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.851645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.869170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.869193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.887426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.887451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.905234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.905259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.923961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.923985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.940997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.941021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.547 [2024-07-25 12:24:36.959754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.547 [2024-07-25 12:24:36.959779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:36.977475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:36.977499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:36.996446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:36.996471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.014436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.014461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.031919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.031943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.049448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.049473] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.066977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.067002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.084686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.084711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.101727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.101751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.120485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.120509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.138084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.138108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.155848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.155878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.174784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.174809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.192181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.192206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.808 [2024-07-25 12:24:37.210473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.808 [2024-07-25 12:24:37.210498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.068 [2024-07-25 12:24:37.228452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.068 [2024-07-25 12:24:37.228476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.068 [2024-07-25 12:24:37.246222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.246246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.265234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.265259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.282215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.282239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.300138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.300162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.317821] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.317845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.334970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.334994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.352460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.352484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.370592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.370616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.388358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.388382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.406278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.406302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.424034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.424059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.442358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.442383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.459958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.459982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.069 [2024-07-25 12:24:37.477708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.069 [2024-07-25 12:24:37.477733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.495575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.495605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.513955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.513981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.532844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.532868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.550958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.550983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.568021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.568045] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.585623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.585649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.603452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.603477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.622301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.622329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.639961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.639987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.658122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.658148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.675213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.675237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.693845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.693869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.711278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.711302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.728783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.728808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.329 [2024-07-25 12:24:37.746538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.329 [2024-07-25 12:24:37.746568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.765102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.765127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.782749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.782774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.801466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.801491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.819447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.819473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.837323] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.837353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.854756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.854780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.872670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.872695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.890456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.890481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.908321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.908346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.925893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.925919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.943319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.943343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.962235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.962260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.979424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.979449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.589 [2024-07-25 12:24:37.997111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.589 [2024-07-25 12:24:37.997135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.014614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.014640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.032378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.032403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.050110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.050135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.067658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.067684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.085704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.085730] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.103840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.103865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.121395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.121419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.139024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.139048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.158093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.158118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.175846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.175877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.193354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.193379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.211226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.211251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.228097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.228122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.246644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.246669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.850 [2024-07-25 12:24:38.264741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.850 [2024-07-25 12:24:38.264767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.282666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.282691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.300249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.300274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.317902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.317928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.336884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.336909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.354486] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.354512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.371763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.371788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.390190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.390214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.407539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.407568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.424790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.424814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.443151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.443175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.460555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.460579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.479367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.479392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.497248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.497272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.111 [2024-07-25 12:24:38.514050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.111 [2024-07-25 12:24:38.514074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.532574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.532598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.550332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.550356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.569356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.569379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.587118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.587142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.605899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.605924] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.623611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.623639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.641579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.641605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.659450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.659474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.677991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.678015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.695826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.695851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.714441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.714466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.732271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.732295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.751093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.751118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.768355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.768379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.372 [2024-07-25 12:24:38.787322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.372 [2024-07-25 12:24:38.787346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.804947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.804972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.822684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.822709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.840002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.840027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.857687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.857712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.876731] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.876756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.894613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.894637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.913121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.913145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.931106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.931131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.948975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.948999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.966853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.966878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:38.979553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.979577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 00:12:05.632 Latency(us) 00:12:05.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.632 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:05.632 Nvme1n1 : 5.01 9631.47 75.25 0.00 0.00 13275.20 5772.21 27021.00 00:12:05.632 =================================================================================================================== 00:12:05.632 Total : 9631.47 75.25 0.00 0.00 13275.20 5772.21 27021.00 00:12:05.632 [2024-07-25 12:24:38.991581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:38.991604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:39.003626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:39.003648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:39.015653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:39.015672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:39.027680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:39.027699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.632 [2024-07-25 12:24:39.039716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.632 [2024-07-25 12:24:39.039735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.633 [2024-07-25 12:24:39.051748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.633 [2024-07-25 12:24:39.051764] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.894 [2024-07-25 12:24:39.063785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.894 [2024-07-25 12:24:39.063804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.894 [2024-07-25 12:24:39.075821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.894 [2024-07-25 12:24:39.075840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.894 [2024-07-25 12:24:39.087857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.894 [2024-07-25 12:24:39.087878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.894 [2024-07-25 12:24:39.099887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.894 [2024-07-25 12:24:39.099904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (307104) - No such process 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 307104 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.895 delay0 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.895 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:05.895 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.895 [2024-07-25 12:24:39.220514] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:12.476 Initializing NVMe Controllers 00:12:12.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:12.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:12.476 Initialization complete. Launching workers. 
00:12:12.476 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 9449 00:12:12.476 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9625, failed to submit 90 00:12:12.476 success 9534, unsuccess 91, failed 0 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:12.476 rmmod nvme_tcp 00:12:12.476 rmmod nvme_fabrics 00:12:12.476 rmmod nvme_keyring 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 305253 ']' 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 305253 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 305253 ']' 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 305253 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 305253 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 305253' 00:12:12.476 killing process with pid 305253 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 305253 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 305253 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.476 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.469 00:12:14.469 real 0m34.690s 00:12:14.469 user 0m45.059s 00:12:14.469 sys 0m11.235s 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:14.469 ************************************ 00:12:14.469 END TEST nvmf_zcopy 00:12:14.469 ************************************ 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:14.469 ************************************ 00:12:14.469 START TEST nvmf_nmic 00:12:14.469 ************************************ 00:12:14.469 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:14.752 * Looking for test storage... 
00:12:14.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.752 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.752 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.752 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.752 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.752 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:14.752 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.752 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:14.752 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.753 12:24:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.753 12:24:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.897 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:22.898 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:22.898 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.898 12:24:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:22.898 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:22.898 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.898 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:12:23.159 00:12:23.159 --- 10.0.0.2 ping statistics --- 00:12:23.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.159 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:12:23.159 00:12:23.159 --- 10.0.0.1 ping statistics --- 00:12:23.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.159 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=313564 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 313564 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 313564 ']' 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.159 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:23.421 [2024-07-25 12:24:56.587780] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:12:23.421 [2024-07-25 12:24:56.587851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.421 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.421 [2024-07-25 12:24:56.681377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.421 [2024-07-25 12:24:56.776461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.421 [2024-07-25 12:24:56.776518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.421 [2024-07-25 12:24:56.776527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.421 [2024-07-25 12:24:56.776533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.421 [2024-07-25 12:24:56.776539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:23.421 [2024-07-25 12:24:56.776619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.421 [2024-07-25 12:24:56.776835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.421 [2024-07-25 12:24:56.776986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.421 [2024-07-25 12:24:56.776989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 [2024-07-25 12:24:57.514723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 Malloc0 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 [2024-07-25 12:24:57.584182] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:24.366 test case1: single bdev can't be used in multiple subsystems 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 [2024-07-25 12:24:57.620032] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:24.366 [2024-07-25 12:24:57.620057] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:24.366 [2024-07-25 12:24:57.620065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.366 request: 00:12:24.366 { 00:12:24.366 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:24.366 "namespace": { 00:12:24.366 "bdev_name": "Malloc0", 00:12:24.366 "no_auto_visible": false 00:12:24.366 }, 00:12:24.366 "method": "nvmf_subsystem_add_ns", 00:12:24.366 "req_id": 1 00:12:24.366 } 00:12:24.366 Got JSON-RPC error response 00:12:24.366 response: 00:12:24.366 { 00:12:24.366 "code": -32602, 00:12:24.366 "message": "Invalid parameters" 00:12:24.366 } 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:24.366 Adding namespace failed - expected result. 
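For reference, a minimal standalone sketch of the RPC sequence that test case1 above exercises, assuming a running nvmf_tgt and the standard SPDK scripts/rpc.py client (the rpc_cmd helper used by the test issues the same RPCs); NQNs, serials and sizes are the ones shown in the log:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, as in target/nmic.sh@17
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # cnode1 claims Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: bdev already claimed, JSON-RPC error -32602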
00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:24.366 test case2: host connect to nvmf target in multiple paths 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 [2024-07-25 12:24:57.632181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.366 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.282 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:27.666 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.666 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:27.666 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.666 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:27.666 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:29.573 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:29.573 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:29.573 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.573 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:29.573 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.573 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:29.573 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:29.573 [global] 00:12:29.573 thread=1 00:12:29.573 invalidate=1 00:12:29.573 rw=write 00:12:29.573 time_based=1 00:12:29.573 runtime=1 00:12:29.573 ioengine=libaio 00:12:29.573 direct=1 00:12:29.573 bs=4096 00:12:29.573 iodepth=1 00:12:29.573 norandommap=0 00:12:29.573 numjobs=1 00:12:29.573 00:12:29.573 verify_dump=1 00:12:29.573 verify_backlog=512 00:12:29.573 verify_state_save=0 00:12:29.573 do_verify=1 00:12:29.573 verify=crc32c-intel 00:12:29.573 [job0] 00:12:29.573 filename=/dev/nvme0n1 00:12:29.573 Could not set queue depth (nvme0n1) 00:12:29.833 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:12:29.833 fio-3.35 00:12:29.833 Starting 1 thread 00:12:31.215 00:12:31.215 job0: (groupid=0, jobs=1): err= 0: pid=314842: Thu Jul 25 12:25:04 2024 00:12:31.215 read: IOPS=14, BW=58.0KiB/s (59.4kB/s)(60.0KiB/1034msec) 00:12:31.215 slat (nsec): min=25519, max=26197, avg=25765.33, stdev=190.98 00:12:31.215 clat (usec): min=1353, max=42437, avg=39259.06, stdev=10488.27 00:12:31.215 lat (usec): min=1378, max=42463, avg=39284.82, stdev=10488.24 00:12:31.215 clat percentiles (usec): 00:12:31.215 | 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[41681], 20.00th=[41681], 00:12:31.215 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:31.215 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:31.215 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:31.216 | 99.99th=[42206] 00:12:31.216 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:12:31.216 slat (nsec): min=9493, max=63565, avg=32528.34, stdev=6849.35 00:12:31.216 clat (usec): min=336, max=1016, avg=828.45, stdev=101.54 00:12:31.216 lat (usec): min=347, max=1066, avg=860.97, stdev=104.39 00:12:31.216 clat percentiles (usec): 00:12:31.216 | 1.00th=[ 424], 5.00th=[ 668], 10.00th=[ 742], 20.00th=[ 783], 00:12:31.216 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 881], 00:12:31.216 | 70.00th=[ 898], 80.00th=[ 914], 90.00th=[ 930], 95.00th=[ 938], 00:12:31.216 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1020], 99.95th=[ 1020], 00:12:31.216 | 99.99th=[ 1020] 00:12:31.216 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.216 lat (usec) : 500=2.47%, 750=8.54%, 1000=85.96% 00:12:31.216 lat (msec) : 2=0.38%, 50=2.66% 00:12:31.216 cpu : usr=1.36%, sys=1.74%, ctx=527, majf=0, minf=1 00:12:31.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.216 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.216 00:12:31.216 Run status group 0 (all jobs): 00:12:31.216 READ: bw=58.0KiB/s (59.4kB/s), 58.0KiB/s-58.0KiB/s (59.4kB/s-59.4kB/s), io=60.0KiB (61.4kB), run=1034-1034msec 00:12:31.216 WRITE: bw=1981KiB/s (2028kB/s), 1981KiB/s-1981KiB/s (2028kB/s-2028kB/s), io=2048KiB (2097kB), run=1034-1034msec 00:12:31.216 00:12:31.216 Disk stats (read/write): 00:12:31.216 nvme0n1: ios=61/512, merge=0/0, ticks=489/350, in_queue=839, util=93.99% 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o 
NAME,SERIAL 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.216 rmmod nvme_tcp 00:12:31.216 rmmod nvme_fabrics 00:12:31.216 rmmod nvme_keyring 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 313564 ']' 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 313564 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 313564 ']' 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 313564 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 313564 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 313564' 00:12:31.216 killing process with pid 313564 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 313564 00:12:31.216 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 313564 00:12:31.477 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.477 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.477 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.477 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.477 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.477 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.477 12:25:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.477 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.386 00:12:33.386 real 0m18.879s 00:12:33.386 user 0m42.506s 00:12:33.386 sys 0m7.195s 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.386 ************************************ 00:12:33.386 END TEST nvmf_nmic 00:12:33.386 ************************************ 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.386 12:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:33.646 ************************************ 00:12:33.646 START TEST nvmf_fio_target 00:12:33.646 ************************************ 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:33.646 * Looking for test storage... 00:12:33.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 
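For readability, the host-identity values that common.sh generates here (NVME_HOSTNQN from 'nvme gen-hostnqn', NVME_HOSTID matching its UUID suffix) are what the initiator side of this test later passes to nvme-cli. A minimal sketch of that attach/verify/detach flow, assuming the same 10.0.0.2:4420 listener, nqn.2016-06.io.spdk:cnode1 subsystem and SPDKISFASTANDAWESOME serial that this run sets up further down (the HOSTID derivation below is an illustration that matches the values in this log, not necessarily the exact expression common.sh uses):

    # Host identity: one NQN per initiator; the ID here is taken as the UUID
    # portion of that NQN, which is what the values in this log correspond to.
    HOSTNQN=$(nvme gen-hostnqn)
    HOSTID=${HOSTNQN##*:}

    # Attach to the target's TCP listener, then look for the namespaces by the
    # subsystem serial the test uses (SPDKISFASTANDAWESOME).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
         --hostnqn="$HOSTNQN" --hostid="$HOSTID"
    lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME

    # Detach once the fio workload is finished
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1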
00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.646 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.647 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:41.788 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:41.788 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:41.788 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:41.788 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.788 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.050 12:25:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:12:42.050 00:12:42.050 --- 10.0.0.2 ping statistics --- 00:12:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.050 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:12:42.050 00:12:42.050 --- 10.0.0.1 ping statistics --- 00:12:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.050 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=319481 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 319481 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 319481 ']' 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.050 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.311 [2024-07-25 12:25:15.503862] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:12:42.311 [2024-07-25 12:25:15.503925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.311 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.311 [2024-07-25 12:25:15.604733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.311 [2024-07-25 12:25:15.702709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.311 [2024-07-25 12:25:15.702768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.311 [2024-07-25 12:25:15.702776] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.311 [2024-07-25 12:25:15.702782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.311 [2024-07-25 12:25:15.702788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
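Stripped of the xtrace prefixes, the target-side bring-up performed above reduces to a short sequence: move one E810 port into its own network namespace, address both ends, open TCP/4420, and launch nvmf_tgt inside that namespace. A condensed sketch using the names from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2), plus the trace-capture hint printed by the notices above; the real scripts background nvmf_tgt and wait for its RPC socket via waitforlisten, and the copy destination below is arbitrary:

    # Target NIC lives in its own namespace; the initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

    # Start the target inside the namespace: shm id 0, all tracepoint groups, 4 cores
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # While it runs, snapshot the trace as the notice suggests, or keep the shm file
    spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0           # for offline analysis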
00:12:42.311 [2024-07-25 12:25:15.702862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.311 [2024-07-25 12:25:15.702989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.311 [2024-07-25 12:25:15.703140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.311 [2024-07-25 12:25:15.703141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:43.254 [2024-07-25 12:25:16.543947] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.254 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.516 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:43.516 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.776 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:43.776 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:44.037 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:44.037 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:44.299 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:44.299 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:44.299 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:44.559 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:44.559 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:44.820 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:44.820 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:45.082 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:45.082 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:45.344 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.605 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:45.605 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.866 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:45.866 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.127 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.127 [2024-07-25 12:25:19.490539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.127 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:46.388 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:46.648 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.030 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:48.030 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.030 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.030 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:48.030 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:48.030 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.567 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.567 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.567 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.567 12:25:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:50.567 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.567 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:50.567 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:50.567 [global] 00:12:50.567 thread=1 00:12:50.567 invalidate=1 00:12:50.567 rw=write 00:12:50.567 time_based=1 00:12:50.567 runtime=1 00:12:50.567 ioengine=libaio 00:12:50.567 direct=1 00:12:50.567 bs=4096 00:12:50.567 iodepth=1 00:12:50.568 norandommap=0 00:12:50.568 numjobs=1 00:12:50.568 00:12:50.568 verify_dump=1 00:12:50.568 verify_backlog=512 00:12:50.568 verify_state_save=0 00:12:50.568 do_verify=1 00:12:50.568 verify=crc32c-intel 00:12:50.568 [job0] 00:12:50.568 filename=/dev/nvme0n1 00:12:50.568 [job1] 00:12:50.568 filename=/dev/nvme0n2 00:12:50.568 [job2] 00:12:50.568 filename=/dev/nvme0n3 00:12:50.568 [job3] 00:12:50.568 filename=/dev/nvme0n4 00:12:50.568 Could not set queue depth (nvme0n1) 00:12:50.568 Could not set queue depth (nvme0n2) 00:12:50.568 Could not set queue depth (nvme0n3) 00:12:50.568 Could not set queue depth (nvme0n4) 00:12:50.568 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.568 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.568 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.568 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.568 fio-3.35 00:12:50.568 Starting 4 threads 00:12:51.964 00:12:51.964 job0: (groupid=0, jobs=1): err= 0: pid=321095: Thu Jul 25 12:25:25 2024 00:12:51.964 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:12:51.964 slat (nsec): min=25588, max=30698, avg=26102.43, stdev=1018.93 00:12:51.964 clat (usec): min=872, max=42649, avg=29239.79, stdev=19115.42 00:12:51.964 lat (usec): min=898, max=42675, avg=29265.89, stdev=19115.03 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 873], 5.00th=[ 963], 10.00th=[ 971], 20.00th=[ 1029], 00:12:51.964 | 30.00th=[ 1029], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:51.964 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:51.964 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:51.964 | 99.99th=[42730] 00:12:51.964 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:12:51.964 slat (nsec): min=9768, max=68400, avg=28956.04, stdev=10498.22 00:12:51.964 clat (usec): min=347, max=989, avg=631.70, stdev=114.05 00:12:51.964 lat (usec): min=358, max=1023, avg=660.66, stdev=117.77 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 469], 20.00th=[ 537], 00:12:51.964 | 30.00th=[ 586], 40.00th=[ 644], 50.00th=[ 660], 60.00th=[ 676], 00:12:51.964 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 799], 00:12:51.964 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 988], 99.95th=[ 988], 00:12:51.964 | 99.99th=[ 988] 00:12:51.964 bw ( KiB/s): min= 4096, max= 4096, per=45.54%, avg=4096.00, stdev= 0.00, samples=1 00:12:51.964 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:12:51.964 lat (usec) : 500=15.33%, 750=70.65%, 1000=10.47% 00:12:51.964 lat (msec) : 2=0.56%, 50=2.99% 00:12:51.964 cpu : usr=0.88%, sys=1.28%, ctx=537, majf=0, minf=1 00:12:51.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.964 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.964 job1: (groupid=0, jobs=1): err= 0: pid=321096: Thu Jul 25 12:25:25 2024 00:12:51.964 read: IOPS=203, BW=814KiB/s (834kB/s)(816KiB/1002msec) 00:12:51.964 slat (nsec): min=6250, max=55021, avg=24188.80, stdev=4603.54 00:12:51.964 clat (usec): min=518, max=41599, avg=3635.80, stdev=10122.70 00:12:51.964 lat (usec): min=541, max=41610, avg=3659.99, stdev=10122.38 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 570], 5.00th=[ 676], 10.00th=[ 750], 20.00th=[ 807], 00:12:51.964 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[ 930], 60.00th=[ 947], 00:12:51.964 | 70.00th=[ 979], 80.00th=[ 1020], 90.00th=[ 1074], 95.00th=[40633], 00:12:51.964 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:12:51.964 | 99.99th=[41681] 00:12:51.964 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:12:51.964 slat (nsec): min=9008, max=67354, avg=26696.62, stdev=9476.53 00:12:51.964 clat (usec): min=191, max=881, avg=461.28, stdev=141.43 00:12:51.964 lat (usec): min=207, max=912, avg=487.98, stdev=144.11 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 215], 5.00th=[ 241], 10.00th=[ 281], 20.00th=[ 326], 00:12:51.964 | 30.00th=[ 375], 40.00th=[ 424], 50.00th=[ 457], 60.00th=[ 486], 00:12:51.964 | 70.00th=[ 537], 80.00th=[ 586], 90.00th=[ 652], 95.00th=[ 717], 00:12:51.964 | 99.00th=[ 775], 99.50th=[ 840], 99.90th=[ 881], 99.95th=[ 881], 00:12:51.964 | 99.99th=[ 881] 00:12:51.964 bw ( KiB/s): min= 4096, max= 4096, per=45.54%, avg=4096.00, stdev= 0.00, samples=1 00:12:51.964 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:51.964 lat (usec) : 250=5.17%, 500=39.80%, 750=27.23%, 1000=20.53% 00:12:51.964 lat (msec) : 2=5.31%, 50=1.96% 00:12:51.964 cpu : usr=1.30%, sys=1.50%, ctx=717, majf=0, minf=1 00:12:51.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.964 issued rwts: total=204,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.964 job2: (groupid=0, jobs=1): err= 0: pid=321097: Thu Jul 25 12:25:25 2024 00:12:51.964 read: IOPS=446, BW=1786KiB/s (1829kB/s)(1840KiB/1030msec) 00:12:51.964 slat (nsec): min=8470, max=57615, avg=24407.79, stdev=5021.07 00:12:51.964 clat (usec): min=848, max=41699, avg=1295.64, stdev=2659.40 00:12:51.964 lat (usec): min=862, max=41724, avg=1320.04, stdev=2659.42 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 898], 5.00th=[ 988], 10.00th=[ 1012], 20.00th=[ 1057], 00:12:51.964 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:12:51.964 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:12:51.964 | 99.00th=[ 1319], 99.50th=[ 1385], 99.90th=[41681], 
99.95th=[41681], 00:12:51.964 | 99.99th=[41681] 00:12:51.964 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:12:51.964 slat (nsec): min=4850, max=51911, avg=27199.19, stdev=8371.89 00:12:51.964 clat (usec): min=402, max=1232, avg=781.77, stdev=137.98 00:12:51.964 lat (usec): min=433, max=1263, avg=808.97, stdev=138.41 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 474], 5.00th=[ 553], 10.00th=[ 619], 20.00th=[ 668], 00:12:51.964 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 816], 00:12:51.964 | 70.00th=[ 865], 80.00th=[ 914], 90.00th=[ 971], 95.00th=[ 1012], 00:12:51.964 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1237], 99.95th=[ 1237], 00:12:51.964 | 99.99th=[ 1237] 00:12:51.964 bw ( KiB/s): min= 4096, max= 4096, per=45.54%, avg=4096.00, stdev= 0.00, samples=1 00:12:51.964 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:51.964 lat (usec) : 500=0.62%, 750=24.07%, 1000=28.40% 00:12:51.964 lat (msec) : 2=46.71%, 50=0.21% 00:12:51.964 cpu : usr=1.26%, sys=2.62%, ctx=972, majf=0, minf=1 00:12:51.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.964 issued rwts: total=460,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.964 job3: (groupid=0, jobs=1): err= 0: pid=321098: Thu Jul 25 12:25:25 2024 00:12:51.964 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:51.964 slat (nsec): min=7875, max=42071, avg=24647.70, stdev=1610.04 00:12:51.964 clat (usec): min=557, max=1382, avg=963.63, stdev=74.00 00:12:51.964 lat (usec): min=582, max=1406, avg=988.28, stdev=73.88 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 775], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 914], 00:12:51.964 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:12:51.964 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:12:51.964 | 99.00th=[ 1156], 99.50th=[ 1237], 99.90th=[ 1385], 99.95th=[ 1385], 00:12:51.964 | 99.99th=[ 1385] 00:12:51.964 write: IOPS=779, BW=3117KiB/s (3192kB/s)(3120KiB/1001msec); 0 zone resets 00:12:51.964 slat (nsec): min=9173, max=62948, avg=29142.35, stdev=8284.64 00:12:51.964 clat (usec): min=239, max=1326, avg=591.53, stdev=160.49 00:12:51.964 lat (usec): min=250, max=1357, avg=620.67, stdev=161.71 00:12:51.964 clat percentiles (usec): 00:12:51.964 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 416], 20.00th=[ 453], 00:12:51.964 | 30.00th=[ 478], 40.00th=[ 519], 50.00th=[ 570], 60.00th=[ 619], 00:12:51.964 | 70.00th=[ 676], 80.00th=[ 734], 90.00th=[ 807], 95.00th=[ 881], 00:12:51.964 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1319], 99.95th=[ 1319], 00:12:51.964 | 99.99th=[ 1319] 00:12:51.964 bw ( KiB/s): min= 4096, max= 4096, per=45.54%, avg=4096.00, stdev= 0.00, samples=1 00:12:51.964 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:51.964 lat (usec) : 250=0.08%, 500=21.75%, 750=28.10%, 1000=39.16% 00:12:51.964 lat (msec) : 2=10.91% 00:12:51.964 cpu : usr=2.10%, sys=3.50%, ctx=1292, majf=0, minf=1 00:12:51.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.965 
issued rwts: total=512,780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.965 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.965 00:12:51.965 Run status group 0 (all jobs): 00:12:51.965 READ: bw=4656KiB/s (4768kB/s), 90.3KiB/s-2046KiB/s (92.5kB/s-2095kB/s), io=4796KiB (4911kB), run=1001-1030msec 00:12:51.965 WRITE: bw=8994KiB/s (9210kB/s), 1988KiB/s-3117KiB/s (2036kB/s-3192kB/s), io=9264KiB (9486kB), run=1001-1030msec 00:12:51.965 00:12:51.965 Disk stats (read/write): 00:12:51.965 nvme0n1: ios=45/512, merge=0/0, ticks=1418/317, in_queue=1735, util=97.09% 00:12:51.965 nvme0n2: ios=237/512, merge=0/0, ticks=618/224, in_queue=842, util=88.19% 00:12:51.965 nvme0n3: ios=391/512, merge=0/0, ticks=596/397, in_queue=993, util=97.80% 00:12:51.965 nvme0n4: ios=498/512, merge=0/0, ticks=492/316, in_queue=808, util=89.52% 00:12:51.965 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:51.965 [global] 00:12:51.965 thread=1 00:12:51.965 invalidate=1 00:12:51.965 rw=randwrite 00:12:51.965 time_based=1 00:12:51.965 runtime=1 00:12:51.965 ioengine=libaio 00:12:51.965 direct=1 00:12:51.965 bs=4096 00:12:51.965 iodepth=1 00:12:51.965 norandommap=0 00:12:51.965 numjobs=1 00:12:51.965 00:12:51.965 verify_dump=1 00:12:51.965 verify_backlog=512 00:12:51.965 verify_state_save=0 00:12:51.965 do_verify=1 00:12:51.965 verify=crc32c-intel 00:12:51.965 [job0] 00:12:51.965 filename=/dev/nvme0n1 00:12:51.965 [job1] 00:12:51.965 filename=/dev/nvme0n2 00:12:51.965 [job2] 00:12:51.965 filename=/dev/nvme0n3 00:12:51.965 [job3] 00:12:51.965 filename=/dev/nvme0n4 00:12:51.965 Could not set queue depth (nvme0n1) 00:12:51.965 Could not set queue depth (nvme0n2) 00:12:51.965 Could not set queue depth (nvme0n3) 00:12:51.965 Could not set queue depth (nvme0n4) 00:12:52.233 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:52.233 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:52.233 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:52.233 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:52.233 fio-3.35 00:12:52.233 Starting 4 threads 00:12:53.617 00:12:53.617 job0: (groupid=0, jobs=1): err= 0: pid=321575: Thu Jul 25 12:25:26 2024 00:12:53.617 read: IOPS=436, BW=1746KiB/s (1788kB/s)(1748KiB/1001msec) 00:12:53.617 slat (nsec): min=8315, max=53123, avg=25872.49, stdev=3941.46 00:12:53.617 clat (usec): min=832, max=1455, avg=1202.40, stdev=90.15 00:12:53.617 lat (usec): min=858, max=1494, avg=1228.28, stdev=90.69 00:12:53.617 clat percentiles (usec): 00:12:53.617 | 1.00th=[ 955], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1139], 00:12:53.617 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:12:53.617 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1319], 95.00th=[ 1369], 00:12:53.617 | 99.00th=[ 1418], 99.50th=[ 1418], 99.90th=[ 1450], 99.95th=[ 1450], 00:12:53.617 | 99.99th=[ 1450] 00:12:53.617 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:53.617 slat (nsec): min=8934, max=62090, avg=31344.39, stdev=6334.42 00:12:53.617 clat (usec): min=495, max=1164, avg=857.91, stdev=108.76 00:12:53.617 lat (usec): min=527, max=1196, avg=889.26, stdev=109.91 00:12:53.617 clat 
percentiles (usec): 00:12:53.617 | 1.00th=[ 603], 5.00th=[ 668], 10.00th=[ 734], 20.00th=[ 766], 00:12:53.617 | 30.00th=[ 791], 40.00th=[ 840], 50.00th=[ 865], 60.00th=[ 889], 00:12:53.617 | 70.00th=[ 914], 80.00th=[ 947], 90.00th=[ 988], 95.00th=[ 1045], 00:12:53.617 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:12:53.617 | 99.99th=[ 1172] 00:12:53.618 bw ( KiB/s): min= 4096, max= 4096, per=47.95%, avg=4096.00, stdev= 0.00, samples=1 00:12:53.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:53.618 lat (usec) : 500=0.11%, 750=7.17%, 1000=42.99% 00:12:53.618 lat (msec) : 2=49.74% 00:12:53.618 cpu : usr=2.50%, sys=3.30%, ctx=949, majf=0, minf=1 00:12:53.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:53.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 issued rwts: total=437,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:53.618 job1: (groupid=0, jobs=1): err= 0: pid=321576: Thu Jul 25 12:25:26 2024 00:12:53.618 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:53.618 slat (nsec): min=23235, max=43332, avg=24083.35, stdev=2412.80 00:12:53.618 clat (usec): min=869, max=1491, avg=1181.27, stdev=122.76 00:12:53.618 lat (usec): min=893, max=1526, avg=1205.36, stdev=122.77 00:12:53.618 clat percentiles (usec): 00:12:53.618 | 1.00th=[ 914], 5.00th=[ 996], 10.00th=[ 1029], 20.00th=[ 1074], 00:12:53.618 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:12:53.618 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1385], 00:12:53.618 | 99.00th=[ 1450], 99.50th=[ 1483], 99.90th=[ 1500], 99.95th=[ 1500], 00:12:53.618 | 99.99th=[ 1500] 00:12:53.618 write: IOPS=514, BW=2058KiB/s (2107kB/s)(2060KiB/1001msec); 0 zone resets 00:12:53.618 slat (nsec): min=8796, max=61381, avg=24973.28, stdev=9166.87 00:12:53.618 clat (usec): min=207, max=1067, avg=703.82, stdev=188.85 00:12:53.618 lat (usec): min=217, max=1096, avg=728.79, stdev=193.42 00:12:53.618 clat percentiles (usec): 00:12:53.618 | 1.00th=[ 237], 5.00th=[ 351], 10.00th=[ 437], 20.00th=[ 529], 00:12:53.618 | 30.00th=[ 627], 40.00th=[ 685], 50.00th=[ 742], 60.00th=[ 783], 00:12:53.618 | 70.00th=[ 824], 80.00th=[ 873], 90.00th=[ 922], 95.00th=[ 955], 00:12:53.618 | 99.00th=[ 1029], 99.50th=[ 1045], 99.90th=[ 1074], 99.95th=[ 1074], 00:12:53.618 | 99.99th=[ 1074] 00:12:53.618 bw ( KiB/s): min= 4096, max= 4096, per=47.95%, avg=4096.00, stdev= 0.00, samples=1 00:12:53.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:53.618 lat (usec) : 250=1.27%, 500=7.40%, 750=18.01%, 1000=25.51% 00:12:53.618 lat (msec) : 2=47.81% 00:12:53.618 cpu : usr=1.20%, sys=2.90%, ctx=1027, majf=0, minf=1 00:12:53.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:53.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 issued rwts: total=512,515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:53.618 job2: (groupid=0, jobs=1): err= 0: pid=321577: Thu Jul 25 12:25:26 2024 00:12:53.618 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:53.618 slat (nsec): min=23698, max=56189, avg=24566.33, stdev=2755.38 
00:12:53.618 clat (usec): min=584, max=1424, avg=1075.72, stdev=134.54 00:12:53.618 lat (usec): min=608, max=1448, avg=1100.29, stdev=134.53 00:12:53.618 clat percentiles (usec): 00:12:53.618 | 1.00th=[ 750], 5.00th=[ 824], 10.00th=[ 906], 20.00th=[ 971], 00:12:53.618 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1090], 60.00th=[ 1123], 00:12:53.618 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1287], 00:12:53.618 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1418], 99.95th=[ 1418], 00:12:53.618 | 99.99th=[ 1418] 00:12:53.618 write: IOPS=664, BW=2657KiB/s (2721kB/s)(2660KiB/1001msec); 0 zone resets 00:12:53.618 slat (nsec): min=9039, max=62105, avg=28466.17, stdev=7186.46 00:12:53.618 clat (usec): min=228, max=1062, avg=614.16, stdev=137.46 00:12:53.618 lat (usec): min=237, max=1091, avg=642.63, stdev=139.39 00:12:53.618 clat percentiles (usec): 00:12:53.618 | 1.00th=[ 262], 5.00th=[ 375], 10.00th=[ 445], 20.00th=[ 494], 00:12:53.618 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:12:53.618 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 832], 00:12:53.618 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 1057], 99.95th=[ 1057], 00:12:53.618 | 99.99th=[ 1057] 00:12:53.618 bw ( KiB/s): min= 4096, max= 4096, per=47.95%, avg=4096.00, stdev= 0.00, samples=1 00:12:53.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:53.618 lat (usec) : 250=0.25%, 500=11.72%, 750=36.53%, 1000=20.65% 00:12:53.618 lat (msec) : 2=30.84% 00:12:53.618 cpu : usr=1.90%, sys=3.20%, ctx=1177, majf=0, minf=1 00:12:53.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:53.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 issued rwts: total=512,665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:53.618 job3: (groupid=0, jobs=1): err= 0: pid=321578: Thu Jul 25 12:25:26 2024 00:12:53.618 read: IOPS=20, BW=81.4KiB/s (83.3kB/s)(84.0KiB/1032msec) 00:12:53.618 slat (nsec): min=26069, max=40609, avg=27091.76, stdev=3104.07 00:12:53.618 clat (usec): min=896, max=42072, avg=33893.07, stdev=16372.17 00:12:53.618 lat (usec): min=922, max=42098, avg=33920.16, stdev=16370.70 00:12:53.618 clat percentiles (usec): 00:12:53.618 | 1.00th=[ 898], 5.00th=[ 922], 10.00th=[ 979], 20.00th=[40633], 00:12:53.618 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:12:53.618 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:53.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:53.618 | 99.99th=[42206] 00:12:53.618 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:12:53.618 slat (nsec): min=9001, max=54593, avg=28555.37, stdev=9599.28 00:12:53.618 clat (usec): min=213, max=958, avg=587.98, stdev=117.71 00:12:53.618 lat (usec): min=246, max=991, avg=616.53, stdev=121.16 00:12:53.618 clat percentiles (usec): 00:12:53.618 | 1.00th=[ 306], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 494], 00:12:53.618 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:12:53.618 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 791], 00:12:53.618 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 955], 99.95th=[ 955], 00:12:53.618 | 99.99th=[ 955] 00:12:53.618 bw ( KiB/s): min= 4096, max= 4096, per=47.95%, avg=4096.00, stdev= 0.00, samples=1 00:12:53.618 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:53.618 lat (usec) : 250=0.19%, 500=20.26%, 750=66.79%, 1000=9.38% 00:12:53.618 lat (msec) : 2=0.19%, 50=3.19% 00:12:53.618 cpu : usr=0.97%, sys=1.94%, ctx=534, majf=0, minf=1 00:12:53.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:53.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.618 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:53.618 00:12:53.618 Run status group 0 (all jobs): 00:12:53.618 READ: bw=5744KiB/s (5882kB/s), 81.4KiB/s-2046KiB/s (83.3kB/s-2095kB/s), io=5928KiB (6070kB), run=1001-1032msec 00:12:53.618 WRITE: bw=8543KiB/s (8748kB/s), 1984KiB/s-2657KiB/s (2032kB/s-2721kB/s), io=8816KiB (9028kB), run=1001-1032msec 00:12:53.618 00:12:53.618 Disk stats (read/write): 00:12:53.618 nvme0n1: ios=365/512, merge=0/0, ticks=437/343, in_queue=780, util=93.09% 00:12:53.618 nvme0n2: ios=438/512, merge=0/0, ticks=639/356, in_queue=995, util=97.16% 00:12:53.618 nvme0n3: ios=525/512, merge=0/0, ticks=608/301, in_queue=909, util=93.84% 00:12:53.618 nvme0n4: ios=50/512, merge=0/0, ticks=1111/239, in_queue=1350, util=98.00% 00:12:53.618 12:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:53.618 [global] 00:12:53.618 thread=1 00:12:53.618 invalidate=1 00:12:53.618 rw=write 00:12:53.618 time_based=1 00:12:53.618 runtime=1 00:12:53.618 ioengine=libaio 00:12:53.618 direct=1 00:12:53.618 bs=4096 00:12:53.618 iodepth=128 00:12:53.618 norandommap=0 00:12:53.618 numjobs=1 00:12:53.618 00:12:53.618 verify_dump=1 00:12:53.618 verify_backlog=512 00:12:53.618 verify_state_save=0 00:12:53.618 do_verify=1 00:12:53.618 verify=crc32c-intel 00:12:53.618 [job0] 00:12:53.618 filename=/dev/nvme0n1 00:12:53.618 [job1] 00:12:53.618 filename=/dev/nvme0n2 00:12:53.618 [job2] 00:12:53.618 filename=/dev/nvme0n3 00:12:53.618 [job3] 00:12:53.618 filename=/dev/nvme0n4 00:12:53.618 Could not set queue depth (nvme0n1) 00:12:53.618 Could not set queue depth (nvme0n2) 00:12:53.618 Could not set queue depth (nvme0n3) 00:12:53.618 Could not set queue depth (nvme0n4) 00:12:53.879 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.879 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.879 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.879 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.879 fio-3.35 00:12:53.879 Starting 4 threads 00:12:55.260 00:12:55.260 job0: (groupid=0, jobs=1): err= 0: pid=322048: Thu Jul 25 12:25:28 2024 00:12:55.260 read: IOPS=1019, BW=4077KiB/s (4175kB/s)(4216KiB/1034msec) 00:12:55.260 slat (nsec): min=1890, max=56319k, avg=479159.86, stdev=3754217.80 00:12:55.260 clat (msec): min=17, max=113, avg=55.34, stdev=10.72 00:12:55.260 lat (msec): min=17, max=113, avg=55.82, stdev=11.14 00:12:55.260 clat percentiles (msec): 00:12:55.260 | 1.00th=[ 26], 5.00th=[ 46], 10.00th=[ 46], 20.00th=[ 47], 00:12:55.260 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:12:55.260 | 70.00th=[ 60], 80.00th=[ 60], 
90.00th=[ 65], 95.00th=[ 70], 00:12:55.260 | 99.00th=[ 87], 99.50th=[ 96], 99.90th=[ 96], 99.95th=[ 114], 00:12:55.260 | 99.99th=[ 114] 00:12:55.260 write: IOPS=1485, BW=5942KiB/s (6085kB/s)(6144KiB/1034msec); 0 zone resets 00:12:55.260 slat (usec): min=3, max=44454, avg=322.90, stdev=2496.43 00:12:55.260 clat (usec): min=1101, max=129112, avg=47459.26, stdev=21166.45 00:12:55.260 lat (usec): min=1145, max=129120, avg=47782.17, stdev=21377.47 00:12:55.260 clat percentiles (msec): 00:12:55.260 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:12:55.260 | 30.00th=[ 31], 40.00th=[ 39], 50.00th=[ 51], 60.00th=[ 52], 00:12:55.260 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 60], 95.00th=[ 97], 00:12:55.260 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 130], 99.95th=[ 130], 00:12:55.260 | 99.99th=[ 130] 00:12:55.260 bw ( KiB/s): min= 5000, max= 6504, per=15.30%, avg=5752.00, stdev=1063.49, samples=2 00:12:55.260 iops : min= 1250, max= 1626, avg=1438.00, stdev=265.87, samples=2 00:12:55.260 lat (msec) : 2=0.04%, 20=1.39%, 50=40.42%, 100=55.37%, 250=2.78% 00:12:55.260 cpu : usr=1.16%, sys=1.65%, ctx=105, majf=0, minf=1 00:12:55.260 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:12:55.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.260 issued rwts: total=1054,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.260 job1: (groupid=0, jobs=1): err= 0: pid=322050: Thu Jul 25 12:25:28 2024 00:12:55.260 read: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(10.0MiB/1020msec) 00:12:55.260 slat (nsec): min=1728, max=22455k, avg=174698.09, stdev=1519376.84 00:12:55.260 clat (usec): min=7245, max=63486, avg=27323.64, stdev=9702.87 00:12:55.260 lat (usec): min=7275, max=79764, avg=27498.34, stdev=9844.77 00:12:55.260 clat percentiles (usec): 00:12:55.260 | 1.00th=[10028], 5.00th=[10945], 10.00th=[18220], 20.00th=[22152], 00:12:55.260 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[27132], 00:12:55.260 | 70.00th=[27919], 80.00th=[30278], 90.00th=[34341], 95.00th=[45876], 00:12:55.260 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:12:55.260 | 99.99th=[63701] 00:12:55.260 write: IOPS=2594, BW=10.1MiB/s (10.6MB/s)(10.3MiB/1020msec); 0 zone resets 00:12:55.260 slat (usec): min=2, max=35922, avg=137.74, stdev=1448.89 00:12:55.260 clat (usec): min=551, max=54742, avg=22532.29, stdev=7574.47 00:12:55.260 lat (usec): min=671, max=54765, avg=22670.03, stdev=7735.75 00:12:55.260 clat percentiles (usec): 00:12:55.260 | 1.00th=[ 5604], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[17957], 00:12:55.260 | 30.00th=[20579], 40.00th=[22152], 50.00th=[24511], 60.00th=[25297], 00:12:55.260 | 70.00th=[26084], 80.00th=[26870], 90.00th=[29230], 95.00th=[39060], 00:12:55.260 | 99.00th=[39584], 99.50th=[39584], 99.90th=[48497], 99.95th=[49546], 00:12:55.260 | 99.99th=[54789] 00:12:55.260 bw ( KiB/s): min= 8192, max=12288, per=27.24%, avg=10240.00, stdev=2896.31, samples=2 00:12:55.260 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:12:55.260 lat (usec) : 750=0.02% 00:12:55.260 lat (msec) : 2=0.04%, 10=6.34%, 20=15.87%, 50=75.39%, 100=2.34% 00:12:55.260 cpu : usr=2.65%, sys=2.45%, ctx=159, majf=0, minf=1 00:12:55.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:55.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:12:55.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.260 issued rwts: total=2560,2646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.260 job2: (groupid=0, jobs=1): err= 0: pid=322056: Thu Jul 25 12:25:28 2024 00:12:55.260 read: IOPS=740, BW=2962KiB/s (3033kB/s)(3024KiB/1021msec) 00:12:55.260 slat (nsec): min=1977, max=43917k, avg=458949.44, stdev=3542435.42 00:12:55.260 clat (msec): min=7, max=103, avg=54.61, stdev=11.40 00:12:55.260 lat (msec): min=15, max=103, avg=55.06, stdev=11.61 00:12:55.260 clat percentiles (msec): 00:12:55.260 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 42], 00:12:55.260 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 60], 00:12:55.260 | 70.00th=[ 60], 80.00th=[ 60], 90.00th=[ 64], 95.00th=[ 70], 00:12:55.260 | 99.00th=[ 84], 99.50th=[ 84], 99.90th=[ 104], 99.95th=[ 104], 00:12:55.260 | 99.99th=[ 104] 00:12:55.260 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:12:55.260 slat (usec): min=6, max=51816, avg=644.63, stdev=4164.56 00:12:55.260 clat (msec): min=8, max=327, avg=86.03, stdev=76.08 00:12:55.260 lat (msec): min=8, max=330, avg=86.68, stdev=76.46 00:12:55.260 clat percentiles (msec): 00:12:55.260 | 1.00th=[ 16], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 50], 00:12:55.260 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 58], 00:12:55.260 | 70.00th=[ 59], 80.00th=[ 67], 90.00th=[ 232], 95.00th=[ 309], 00:12:55.260 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:12:55.260 | 99.99th=[ 330] 00:12:55.260 bw ( KiB/s): min= 3920, max= 4272, per=10.90%, avg=4096.00, stdev=248.90, samples=2 00:12:55.260 iops : min= 980, max= 1068, avg=1024.00, stdev=62.23, samples=2 00:12:55.260 lat (msec) : 10=0.39%, 20=0.90%, 50=20.56%, 100=68.09%, 250=5.17% 00:12:55.260 lat (msec) : 500=4.89% 00:12:55.260 cpu : usr=1.18%, sys=0.88%, ctx=86, majf=0, minf=1 00:12:55.260 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:12:55.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.260 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.260 issued rwts: total=756,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.260 job3: (groupid=0, jobs=1): err= 0: pid=322057: Thu Jul 25 12:25:28 2024 00:12:55.260 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:12:55.260 slat (nsec): min=1223, max=13760k, avg=130778.87, stdev=947066.60 00:12:55.260 clat (usec): min=4751, max=33243, avg=15658.75, stdev=3888.86 00:12:55.260 lat (usec): min=4765, max=33272, avg=15789.53, stdev=3944.89 00:12:55.260 clat percentiles (usec): 00:12:55.260 | 1.00th=[ 5014], 5.00th=[12125], 10.00th=[12387], 20.00th=[13829], 00:12:55.260 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:12:55.260 | 70.00th=[15401], 80.00th=[17695], 90.00th=[21890], 95.00th=[23987], 00:12:55.260 | 99.00th=[26084], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:12:55.260 | 99.99th=[33162] 00:12:55.260 write: IOPS=4447, BW=17.4MiB/s (18.2MB/s)(17.6MiB/1014msec); 0 zone resets 00:12:55.260 slat (usec): min=2, max=27705, avg=97.89, stdev=601.86 00:12:55.260 clat (usec): min=2978, max=41795, avg=14298.27, stdev=4522.01 00:12:55.260 lat (usec): min=2986, max=41815, avg=14396.17, stdev=4563.52 00:12:55.260 clat percentiles (usec): 00:12:55.260 | 1.00th=[ 3490], 
5.00th=[ 6325], 10.00th=[ 9110], 20.00th=[12518], 00:12:55.261 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:12:55.261 | 70.00th=[14484], 80.00th=[14746], 90.00th=[19268], 95.00th=[21627], 00:12:55.261 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:12:55.261 | 99.99th=[41681] 00:12:55.261 bw ( KiB/s): min=16384, max=18680, per=46.64%, avg=17532.00, stdev=1623.52, samples=2 00:12:55.261 iops : min= 4096, max= 4670, avg=4383.00, stdev=405.88, samples=2 00:12:55.261 lat (msec) : 4=1.05%, 10=6.74%, 20=81.34%, 50=10.88% 00:12:55.261 cpu : usr=3.55%, sys=4.15%, ctx=567, majf=0, minf=1 00:12:55.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:55.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.261 issued rwts: total=4096,4510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.261 00:12:55.261 Run status group 0 (all jobs): 00:12:55.261 READ: bw=32.0MiB/s (33.5MB/s), 2962KiB/s-15.8MiB/s (3033kB/s-16.5MB/s), io=33.1MiB (34.7MB), run=1014-1034msec 00:12:55.261 WRITE: bw=36.7MiB/s (38.5MB/s), 4012KiB/s-17.4MiB/s (4108kB/s-18.2MB/s), io=38.0MiB (39.8MB), run=1014-1034msec 00:12:55.261 00:12:55.261 Disk stats (read/write): 00:12:55.261 nvme0n1: ios=1074/1191, merge=0/0, ticks=55474/46670, in_queue=102144, util=87.88% 00:12:55.261 nvme0n2: ios=2091/2275, merge=0/0, ticks=54880/49182, in_queue=104062, util=93.19% 00:12:55.261 nvme0n3: ios=533/791, merge=0/0, ticks=29032/73800, in_queue=102832, util=89.24% 00:12:55.261 nvme0n4: ios=3566/3584, merge=0/0, ticks=54388/50442, in_queue=104830, util=89.66% 00:12:55.261 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:55.261 [global] 00:12:55.261 thread=1 00:12:55.261 invalidate=1 00:12:55.261 rw=randwrite 00:12:55.261 time_based=1 00:12:55.261 runtime=1 00:12:55.261 ioengine=libaio 00:12:55.261 direct=1 00:12:55.261 bs=4096 00:12:55.261 iodepth=128 00:12:55.261 norandommap=0 00:12:55.261 numjobs=1 00:12:55.261 00:12:55.261 verify_dump=1 00:12:55.261 verify_backlog=512 00:12:55.261 verify_state_save=0 00:12:55.261 do_verify=1 00:12:55.261 verify=crc32c-intel 00:12:55.261 [job0] 00:12:55.261 filename=/dev/nvme0n1 00:12:55.261 [job1] 00:12:55.261 filename=/dev/nvme0n2 00:12:55.261 [job2] 00:12:55.261 filename=/dev/nvme0n3 00:12:55.261 [job3] 00:12:55.261 filename=/dev/nvme0n4 00:12:55.261 Could not set queue depth (nvme0n1) 00:12:55.261 Could not set queue depth (nvme0n2) 00:12:55.261 Could not set queue depth (nvme0n3) 00:12:55.261 Could not set queue depth (nvme0n4) 00:12:55.520 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:55.520 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:55.520 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:55.520 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:55.520 fio-3.35 00:12:55.520 Starting 4 threads 00:12:56.914 00:12:56.914 job0: (groupid=0, jobs=1): err= 0: pid=322521: Thu Jul 25 12:25:29 2024 00:12:56.914 read: IOPS=3141, BW=12.3MiB/s 
(12.9MB/s)(12.5MiB/1018msec) 00:12:56.914 slat (nsec): min=1166, max=23265k, avg=157882.00, stdev=1224896.09 00:12:56.914 clat (usec): min=3390, max=57798, avg=19061.89, stdev=8456.80 00:12:56.914 lat (usec): min=5805, max=57806, avg=19219.77, stdev=8563.43 00:12:56.914 clat percentiles (usec): 00:12:56.914 | 1.00th=[ 7111], 5.00th=[11469], 10.00th=[11600], 20.00th=[11994], 00:12:56.914 | 30.00th=[12256], 40.00th=[12649], 50.00th=[15401], 60.00th=[21365], 00:12:56.914 | 70.00th=[25035], 80.00th=[25560], 90.00th=[27919], 95.00th=[34341], 00:12:56.914 | 99.00th=[47973], 99.50th=[50070], 99.90th=[57934], 99.95th=[57934], 00:12:56.914 | 99.99th=[57934] 00:12:56.914 write: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1018msec); 0 zone resets 00:12:56.914 slat (usec): min=2, max=21726, avg=122.01, stdev=962.54 00:12:56.914 clat (usec): min=1943, max=57797, avg=19023.44, stdev=9610.53 00:12:56.914 lat (usec): min=1949, max=57820, avg=19145.44, stdev=9693.39 00:12:56.914 clat percentiles (usec): 00:12:56.914 | 1.00th=[ 2769], 5.00th=[ 7963], 10.00th=[ 9503], 20.00th=[10421], 00:12:56.914 | 30.00th=[10683], 40.00th=[14615], 50.00th=[20579], 60.00th=[22938], 00:12:56.914 | 70.00th=[24511], 80.00th=[25822], 90.00th=[26346], 95.00th=[28443], 00:12:56.914 | 99.00th=[56886], 99.50th=[56886], 99.90th=[57410], 99.95th=[57934], 00:12:56.914 | 99.99th=[57934] 00:12:56.914 bw ( KiB/s): min=13632, max=15024, per=39.89%, avg=14328.00, stdev=984.29, samples=2 00:12:56.914 iops : min= 3408, max= 3756, avg=3582.00, stdev=246.07, samples=2 00:12:56.914 lat (msec) : 2=0.15%, 4=0.90%, 10=7.39%, 20=42.41%, 50=47.69% 00:12:56.914 lat (msec) : 100=1.47% 00:12:56.914 cpu : usr=2.95%, sys=3.05%, ctx=275, majf=0, minf=1 00:12:56.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:56.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:56.914 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:56.914 job1: (groupid=0, jobs=1): err= 0: pid=322530: Thu Jul 25 12:25:29 2024 00:12:56.914 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:12:56.914 slat (nsec): min=925, max=14490k, avg=108502.23, stdev=787955.54 00:12:56.914 clat (usec): min=2338, max=41209, avg=13657.88, stdev=5290.00 00:12:56.914 lat (usec): min=2348, max=41237, avg=13766.38, stdev=5358.93 00:12:56.914 clat percentiles (usec): 00:12:56.914 | 1.00th=[ 4555], 5.00th=[ 4948], 10.00th=[ 8160], 20.00th=[10814], 00:12:56.914 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:12:56.914 | 70.00th=[15139], 80.00th=[17957], 90.00th=[21103], 95.00th=[24511], 00:12:56.914 | 99.00th=[27657], 99.50th=[28705], 99.90th=[30802], 99.95th=[33424], 00:12:56.914 | 99.99th=[41157] 00:12:56.914 write: IOPS=3911, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1012msec); 0 zone resets 00:12:56.914 slat (nsec): min=1551, max=10578k, avg=129341.00, stdev=784406.56 00:12:56.914 clat (usec): min=1208, max=101871, avg=20096.81, stdev=22302.02 00:12:56.914 lat (usec): min=1219, max=101880, avg=20226.15, stdev=22419.80 00:12:56.914 clat percentiles (msec): 00:12:56.914 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 8], 00:12:56.914 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:12:56.914 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 54], 95.00th=[ 82], 00:12:56.914 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 103], 99.95th=[ 
103], 00:12:56.914 | 99.99th=[ 103] 00:12:56.914 bw ( KiB/s): min=14472, max=16176, per=42.66%, avg=15324.00, stdev=1204.91, samples=2 00:12:56.914 iops : min= 3618, max= 4044, avg=3831.00, stdev=301.23, samples=2 00:12:56.914 lat (msec) : 2=0.11%, 4=1.37%, 10=19.49%, 20=59.61%, 50=13.82% 00:12:56.914 lat (msec) : 100=5.52%, 250=0.09% 00:12:56.914 cpu : usr=2.97%, sys=4.45%, ctx=452, majf=0, minf=1 00:12:56.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:56.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:56.914 issued rwts: total=3584,3958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:56.914 job2: (groupid=0, jobs=1): err= 0: pid=322531: Thu Jul 25 12:25:29 2024 00:12:56.914 read: IOPS=943, BW=3775KiB/s (3866kB/s)(4032KiB/1068msec) 00:12:56.914 slat (usec): min=2, max=64442, avg=562.26, stdev=4406.14 00:12:56.914 clat (msec): min=20, max=151, avg=74.45, stdev=20.69 00:12:56.914 lat (msec): min=20, max=172, avg=75.01, stdev=21.09 00:12:56.914 clat percentiles (msec): 00:12:56.914 | 1.00th=[ 32], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 58], 00:12:56.914 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 75], 00:12:56.914 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 110], 00:12:56.914 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 131], 99.95th=[ 153], 00:12:56.914 | 99.99th=[ 153] 00:12:56.914 write: IOPS=958, BW=3835KiB/s (3927kB/s)(4096KiB/1068msec); 0 zone resets 00:12:56.914 slat (usec): min=6, max=53505, avg=423.20, stdev=3368.07 00:12:56.914 clat (msec): min=11, max=123, avg=58.76, stdev=11.70 00:12:56.914 lat (msec): min=11, max=123, avg=59.18, stdev=12.22 00:12:56.914 clat percentiles (msec): 00:12:56.914 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 50], 20.00th=[ 55], 00:12:56.914 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 00:12:56.914 | 70.00th=[ 67], 80.00th=[ 68], 90.00th=[ 69], 95.00th=[ 69], 00:12:56.914 | 99.00th=[ 82], 99.50th=[ 106], 99.90th=[ 124], 99.95th=[ 124], 00:12:56.914 | 99.99th=[ 124] 00:12:56.914 bw ( KiB/s): min= 4096, max= 4096, per=11.40%, avg=4096.00, stdev= 0.00, samples=2 00:12:56.914 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:12:56.914 lat (msec) : 20=0.69%, 50=7.58%, 100=82.04%, 250=9.69% 00:12:56.914 cpu : usr=0.84%, sys=1.41%, ctx=93, majf=0, minf=1 00:12:56.914 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:12:56.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.914 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:56.914 issued rwts: total=1008,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:56.914 job3: (groupid=0, jobs=1): err= 0: pid=322532: Thu Jul 25 12:25:29 2024 00:12:56.914 read: IOPS=981, BW=3926KiB/s (4020kB/s)(4040KiB/1029msec) 00:12:56.914 slat (usec): min=3, max=65036, avg=570.85, stdev=4826.86 00:12:56.914 clat (msec): min=3, max=154, avg=69.15, stdev=17.80 00:12:56.914 lat (msec): min=18, max=154, avg=69.72, stdev=18.17 00:12:56.914 clat percentiles (msec): 00:12:56.914 | 1.00th=[ 24], 5.00th=[ 56], 10.00th=[ 56], 20.00th=[ 58], 00:12:56.914 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 68], 00:12:56.914 | 70.00th=[ 70], 80.00th=[ 90], 90.00th=[ 91], 95.00th=[ 106], 00:12:56.914 | 99.00th=[ 
123], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 155], 00:12:56.914 | 99.99th=[ 155] 00:12:56.914 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:12:56.914 slat (usec): min=6, max=63024, avg=429.78, stdev=3836.73 00:12:56.914 clat (msec): min=10, max=129, avg=59.03, stdev=11.85 00:12:56.914 lat (msec): min=10, max=129, avg=59.46, stdev=12.49 00:12:56.914 clat percentiles (msec): 00:12:56.914 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 53], 20.00th=[ 56], 00:12:56.914 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 63], 00:12:56.914 | 70.00th=[ 67], 80.00th=[ 68], 90.00th=[ 69], 95.00th=[ 69], 00:12:56.914 | 99.00th=[ 69], 99.50th=[ 115], 99.90th=[ 129], 99.95th=[ 130], 00:12:56.914 | 99.99th=[ 130] 00:12:56.914 bw ( KiB/s): min= 4096, max= 4096, per=11.40%, avg=4096.00, stdev= 0.00, samples=2 00:12:56.914 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:12:56.914 lat (msec) : 4=0.05%, 20=1.47%, 50=3.54%, 100=91.54%, 250=3.39% 00:12:56.914 cpu : usr=0.68%, sys=1.46%, ctx=90, majf=0, minf=1 00:12:56.914 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:12:56.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.914 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:56.914 issued rwts: total=1010,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:56.914 00:12:56.914 Run status group 0 (all jobs): 00:12:56.914 READ: bw=32.2MiB/s (33.7MB/s), 3775KiB/s-13.8MiB/s (3866kB/s-14.5MB/s), io=34.4MiB (36.0MB), run=1012-1068msec 00:12:56.914 WRITE: bw=35.1MiB/s (36.8MB/s), 3835KiB/s-15.3MiB/s (3927kB/s-16.0MB/s), io=37.5MiB (39.3MB), run=1012-1068msec 00:12:56.914 00:12:56.915 Disk stats (read/write): 00:12:56.915 nvme0n1: ios=2859/3072, merge=0/0, ticks=48196/54731, in_queue=102927, util=87.88% 00:12:56.915 nvme0n2: ios=2785/2990, merge=0/0, ticks=37542/63947, in_queue=101489, util=89.23% 00:12:56.915 nvme0n3: ios=663/1024, merge=0/0, ticks=41325/59083, in_queue=100408, util=96.34% 00:12:56.915 nvme0n4: ios=678/1024, merge=0/0, ticks=41134/59668, in_queue=100802, util=100.00% 00:12:56.915 12:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:56.915 12:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=322564 00:12:56.915 12:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:56.915 12:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:56.915 [global] 00:12:56.915 thread=1 00:12:56.915 invalidate=1 00:12:56.915 rw=read 00:12:56.915 time_based=1 00:12:56.915 runtime=10 00:12:56.915 ioengine=libaio 00:12:56.915 direct=1 00:12:56.915 bs=4096 00:12:56.915 iodepth=1 00:12:56.915 norandommap=1 00:12:56.915 numjobs=1 00:12:56.915 00:12:56.915 [job0] 00:12:56.915 filename=/dev/nvme0n1 00:12:56.915 [job1] 00:12:56.915 filename=/dev/nvme0n2 00:12:56.915 [job2] 00:12:56.915 filename=/dev/nvme0n3 00:12:56.915 [job3] 00:12:56.915 filename=/dev/nvme0n4 00:12:56.915 Could not set queue depth (nvme0n1) 00:12:56.915 Could not set queue depth (nvme0n2) 00:12:56.915 Could not set queue depth (nvme0n3) 00:12:56.915 Could not set queue depth (nvme0n4) 00:12:57.177 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.177 job1: (g=0): 
rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.177 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.177 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.177 fio-3.35 00:12:57.177 Starting 4 threads 00:12:59.717 12:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:59.977 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=10162176, buflen=4096 00:12:59.977 fio: pid=322953, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:59.977 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:59.977 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9932800, buflen=4096 00:12:59.977 fio: pid=322947, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:59.977 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:59.977 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:00.238 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=294912, buflen=4096 00:13:00.238 fio: pid=322915, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:00.238 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:00.238 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:00.238 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13328384, buflen=4096 00:13:00.238 fio: pid=322932, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:00.499 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:00.499 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:00.499 00:13:00.499 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=322915: Thu Jul 25 12:25:33 2024 00:13:00.499 read: IOPS=24, BW=98.6KiB/s (101kB/s)(288KiB/2921msec) 00:13:00.499 slat (usec): min=9, max=13712, avg=400.36, stdev=2248.53 00:13:00.499 clat (usec): min=602, max=42189, avg=40144.26, stdev=8243.55 00:13:00.499 lat (usec): min=646, max=55065, avg=40549.82, stdev=8601.45 00:13:00.499 clat percentiles (usec): 00:13:00.499 | 1.00th=[ 603], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:13:00.499 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:13:00.499 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:00.499 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:00.499 | 99.99th=[42206] 00:13:00.499 bw ( KiB/s): min= 96, max= 104, per=0.93%, avg=99.20, stdev= 4.38, samples=5 00:13:00.499 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, 
samples=5 00:13:00.499 lat (usec) : 750=1.37%, 1000=1.37% 00:13:00.499 lat (msec) : 2=1.37%, 50=94.52% 00:13:00.499 cpu : usr=0.14%, sys=0.00%, ctx=75, majf=0, minf=1 00:13:00.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:00.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.499 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.499 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:00.499 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=322932: Thu Jul 25 12:25:33 2024 00:13:00.499 read: IOPS=1051, BW=4205KiB/s (4306kB/s)(12.7MiB/3095msec) 00:13:00.499 slat (usec): min=6, max=14943, avg=39.27, stdev=435.43 00:13:00.499 clat (usec): min=215, max=6225, avg=903.69, stdev=190.54 00:13:00.499 lat (usec): min=223, max=15901, avg=942.97, stdev=479.27 00:13:00.499 clat percentiles (usec): 00:13:00.499 | 1.00th=[ 461], 5.00th=[ 594], 10.00th=[ 676], 20.00th=[ 791], 00:13:00.499 | 30.00th=[ 865], 40.00th=[ 914], 50.00th=[ 947], 60.00th=[ 971], 00:13:00.499 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:13:00.499 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1303], 99.95th=[ 4359], 00:13:00.499 | 99.99th=[ 6194] 00:13:00.499 bw ( KiB/s): min= 3745, max= 4824, per=39.78%, avg=4232.17, stdev=392.52, samples=6 00:13:00.499 iops : min= 936, max= 1206, avg=1058.00, stdev=98.19, samples=6 00:13:00.499 lat (usec) : 250=0.12%, 500=2.00%, 750=13.43%, 1000=59.14% 00:13:00.499 lat (msec) : 2=25.19%, 4=0.03%, 10=0.06% 00:13:00.499 cpu : usr=2.00%, sys=3.43%, ctx=3259, majf=0, minf=1 00:13:00.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:00.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.499 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.499 issued rwts: total=3255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:00.499 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=322947: Thu Jul 25 12:25:33 2024 00:13:00.499 read: IOPS=885, BW=3540KiB/s (3625kB/s)(9700KiB/2740msec) 00:13:00.499 slat (usec): min=6, max=17421, avg=37.72, stdev=464.74 00:13:00.499 clat (usec): min=463, max=1610, avg=1085.55, stdev=150.52 00:13:00.499 lat (usec): min=488, max=18512, avg=1123.27, stdev=489.56 00:13:00.499 clat percentiles (usec): 00:13:00.499 | 1.00th=[ 578], 5.00th=[ 775], 10.00th=[ 889], 20.00th=[ 996], 00:13:00.499 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:13:00.499 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:13:00.499 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1450], 99.95th=[ 1516], 00:13:00.499 | 99.99th=[ 1614] 00:13:00.499 bw ( KiB/s): min= 3424, max= 4112, per=33.76%, avg=3592.00, stdev=292.19, samples=5 00:13:00.499 iops : min= 856, max= 1028, avg=898.00, stdev=73.05, samples=5 00:13:00.499 lat (usec) : 500=0.29%, 750=4.00%, 1000=16.65% 00:13:00.499 lat (msec) : 2=79.02% 00:13:00.499 cpu : usr=0.88%, sys=2.59%, ctx=2428, majf=0, minf=1 00:13:00.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:00.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.499 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:00.499 issued rwts: total=2426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:00.499 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=322953: Thu Jul 25 12:25:33 2024 00:13:00.499 read: IOPS=964, BW=3855KiB/s (3948kB/s)(9924KiB/2574msec) 00:13:00.499 slat (nsec): min=6104, max=65565, avg=24286.34, stdev=3029.44 00:13:00.499 clat (usec): min=719, max=1421, avg=1006.57, stdev=90.72 00:13:00.499 lat (usec): min=743, max=1445, avg=1030.86, stdev=90.59 00:13:00.499 clat percentiles (usec): 00:13:00.499 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 930], 00:13:00.499 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:13:00.499 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:13:00.499 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1287], 99.95th=[ 1352], 00:13:00.499 | 99.99th=[ 1418] 00:13:00.499 bw ( KiB/s): min= 3824, max= 3904, per=36.28%, avg=3860.80, stdev=33.75, samples=5 00:13:00.499 iops : min= 956, max= 976, avg=965.20, stdev= 8.44, samples=5 00:13:00.499 lat (usec) : 750=0.44%, 1000=43.39% 00:13:00.499 lat (msec) : 2=56.12% 00:13:00.499 cpu : usr=0.86%, sys=2.91%, ctx=2483, majf=0, minf=2 00:13:00.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:00.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.499 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.499 issued rwts: total=2482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:00.499 00:13:00.499 Run status group 0 (all jobs): 00:13:00.499 READ: bw=10.4MiB/s (10.9MB/s), 98.6KiB/s-4205KiB/s (101kB/s-4306kB/s), io=32.2MiB (33.7MB), run=2574-3095msec 00:13:00.499 00:13:00.499 Disk stats (read/write): 00:13:00.499 nvme0n1: ios=70/0, merge=0/0, ticks=2809/0, in_queue=2809, util=93.86% 00:13:00.499 nvme0n2: ios=3252/0, merge=0/0, ticks=2833/0, in_queue=2833, util=94.44% 00:13:00.499 nvme0n3: ios=2316/0, merge=0/0, ticks=2439/0, in_queue=2439, util=96.14% 00:13:00.499 nvme0n4: ios=2250/0, merge=0/0, ticks=2173/0, in_queue=2173, util=96.01% 00:13:00.499 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:00.499 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:00.761 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:00.761 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:01.020 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:01.020 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:01.281 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:01.281 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:01.281 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:01.281 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 322564 00:13:01.281 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:01.281 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:01.541 nvmf hotplug test: fio failed as expected 00:13:01.541 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.802 rmmod nvme_tcp 00:13:01.802 rmmod nvme_fabrics 00:13:01.802 rmmod nvme_keyring 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:01.802 
12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 319481 ']' 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 319481 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 319481 ']' 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 319481 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 319481 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.802 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 319481' 00:13:01.803 killing process with pid 319481 00:13:01.803 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 319481 00:13:01.803 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 319481 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.064 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:03.977 00:13:03.977 real 0m30.505s 00:13:03.977 user 2m14.871s 00:13:03.977 sys 0m10.042s 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.977 ************************************ 00:13:03.977 END TEST nvmf_fio_target 00:13:03.977 ************************************ 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.977 12:25:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:04.237 ************************************ 00:13:04.237 START TEST nvmf_bdevio 00:13:04.237 ************************************ 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:04.237 * Looking for test storage... 00:13:04.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.237 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.238 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.238 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:12.455 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:12.455 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:12.455 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:12.455 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:12.455 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.456 12:25:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:13:12.456 00:13:12.456 --- 10.0.0.2 ping statistics --- 00:13:12.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.456 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:13:12.456 00:13:12.456 --- 10.0.0.1 ping statistics --- 00:13:12.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.456 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.456 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=328148 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 328148 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 328148 ']' 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.716 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 [2024-07-25 12:25:45.947876] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:13:12.716 [2024-07-25 12:25:45.947938] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.716 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.716 [2024-07-25 12:25:46.088178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.977 [2024-07-25 12:25:46.248947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.977 [2024-07-25 12:25:46.249038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.977 [2024-07-25 12:25:46.249065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.977 [2024-07-25 12:25:46.249088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.977 [2024-07-25 12:25:46.249108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.977 [2024-07-25 12:25:46.249312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:12.977 [2024-07-25 12:25:46.249468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:12.977 [2024-07-25 12:25:46.249624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:12.977 [2024-07-25 12:25:46.249639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 [2024-07-25 12:25:46.863027] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 Malloc0 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 [2024-07-25 12:25:46.939219] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:13.548 { 00:13:13.548 "params": { 00:13:13.548 "name": "Nvme$subsystem", 00:13:13.548 "trtype": "$TEST_TRANSPORT", 00:13:13.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.548 "adrfam": "ipv4", 00:13:13.548 "trsvcid": "$NVMF_PORT", 00:13:13.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.548 "hdgst": ${hdgst:-false}, 00:13:13.548 "ddgst": ${ddgst:-false} 00:13:13.548 }, 00:13:13.548 "method": "bdev_nvme_attach_controller" 00:13:13.548 } 00:13:13.548 EOF 00:13:13.548 )") 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
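The gen_nvmf_target_json helper traced above assembles a bdev_nvme_attach_controller stanza and hands it to bdevio on /dev/fd/62; the rendered parameters are printed just below. A minimal standalone sketch of the same invocation, assuming $SPDK_DIR points at the checkout used in this run, that the target set up by bdevio.sh is already listening on 10.0.0.2:4420, and that the helper's wrapping matches the plain SPDK --json subsystem layout shown here:

# Hedged sketch: write the attach-controller config to a file instead of /dev/fd/62.
# The params mirror the JSON printed by nvmf/common.sh@558 in the next entry.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Run the bdevio unit tests against the attached Nvme1n1 bdev, as bdevio.sh@24 does.
"$SPDK_DIR/test/bdev/bdevio/bdevio" --json /tmp/bdevio_nvme.json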
00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:13.548 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:13.548 "params": { 00:13:13.548 "name": "Nvme1", 00:13:13.548 "trtype": "tcp", 00:13:13.548 "traddr": "10.0.0.2", 00:13:13.549 "adrfam": "ipv4", 00:13:13.549 "trsvcid": "4420", 00:13:13.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.549 "hdgst": false, 00:13:13.549 "ddgst": false 00:13:13.549 }, 00:13:13.549 "method": "bdev_nvme_attach_controller" 00:13:13.549 }' 00:13:13.809 [2024-07-25 12:25:46.995418] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:13:13.809 [2024-07-25 12:25:46.995483] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328244 ] 00:13:13.809 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.809 [2024-07-25 12:25:47.081119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.809 [2024-07-25 12:25:47.177985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.809 [2024-07-25 12:25:47.178142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.809 [2024-07-25 12:25:47.178142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.379 I/O targets: 00:13:14.379 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:14.379 00:13:14.379 00:13:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:13:14.379 http://cunit.sourceforge.net/ 00:13:14.379 00:13:14.379 00:13:14.379 Suite: bdevio tests on: Nvme1n1 00:13:14.379 Test: blockdev write read block ...passed 00:13:14.379 Test: blockdev write zeroes read block ...passed 00:13:14.379 Test: blockdev write zeroes read no split ...passed 00:13:14.379 Test: blockdev write zeroes read split ...passed 00:13:14.380 Test: blockdev write zeroes read split partial ...passed 00:13:14.380 Test: blockdev reset ...[2024-07-25 12:25:47.636014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:14.380 [2024-07-25 12:25:47.636109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa0fc0 (9): Bad file descriptor 00:13:14.380 [2024-07-25 12:25:47.690231] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:14.380 passed 00:13:14.380 Test: blockdev write read 8 blocks ...passed 00:13:14.380 Test: blockdev write read size > 128k ...passed 00:13:14.380 Test: blockdev write read invalid size ...passed 00:13:14.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.380 Test: blockdev write read max offset ...passed 00:13:14.640 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.640 Test: blockdev writev readv 8 blocks ...passed 00:13:14.640 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.640 Test: blockdev writev readv block ...passed 00:13:14.640 Test: blockdev writev readv size > 128k ...passed 00:13:14.640 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.640 Test: blockdev comparev and writev ...[2024-07-25 12:25:48.004351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.004437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:14.640 [2024-07-25 12:25:48.004484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.004508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:14.640 [2024-07-25 12:25:48.005443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.005481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:14.640 [2024-07-25 12:25:48.005521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.005545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:14.640 [2024-07-25 12:25:48.006410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.006443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:14.640 [2024-07-25 12:25:48.006483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.006506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:14.640 [2024-07-25 12:25:48.007464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.007498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:14.640 [2024-07-25 12:25:48.007539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:14.640 [2024-07-25 12:25:48.007572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:14.640 passed 00:13:14.900 Test: blockdev nvme passthru rw ...passed 00:13:14.900 Test: blockdev nvme passthru vendor specific ...[2024-07-25 12:25:48.093537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:14.900 [2024-07-25 12:25:48.093584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:14.900 [2024-07-25 12:25:48.094135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:14.900 [2024-07-25 12:25:48.094167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:14.900 [2024-07-25 12:25:48.094688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:14.900 [2024-07-25 12:25:48.094720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:14.900 [2024-07-25 12:25:48.095255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:14.900 [2024-07-25 12:25:48.095286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:14.900 passed 00:13:14.900 Test: blockdev nvme admin passthru ...passed 00:13:14.900 Test: blockdev copy ...passed 00:13:14.900 00:13:14.900 Run Summary: Type Total Ran Passed Failed Inactive 00:13:14.900 suites 1 1 n/a 0 0 00:13:14.900 tests 23 23 23 0 0 00:13:14.900 asserts 152 152 152 0 n/a 00:13:14.900 00:13:14.900 Elapsed time = 1.370 seconds 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.160 rmmod nvme_tcp 00:13:15.160 rmmod nvme_fabrics 00:13:15.160 rmmod nvme_keyring 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
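nvmftestfini above pulls the kernel NVMe initiator modules back out before the target process is reaped; the pid kill, namespace removal, and address flush follow in the next entries. A condensed sketch of that teardown order, assuming the interface and namespace names used throughout this run and that remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace created during setup:

# Hedged sketch of the cleanup sequence traced by nvmf/common.sh in this log.
# Run from the shell that started nvmf_tgt so $nvmfpid and `wait` are valid.
modprobe -v -r nvme-tcp           # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                   # pid captured when nvmfappstart launched nvmf_tgt (328148 here)
wait "$nvmfpid" 2>/dev/null
ip netns delete cvl_0_0_ns_spdk   # assumption: what remove_spdk_ns does with the namespace added at setup
ip -4 addr flush cvl_0_1          # initiator-side interface that carried 10.0.0.1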
00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 328148 ']' 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 328148 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 328148 ']' 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 328148 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 328148 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 328148' 00:13:15.160 killing process with pid 328148 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 328148 00:13:15.160 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 328148 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.421 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.968 00:13:17.968 real 0m13.422s 00:13:17.968 user 0m14.948s 00:13:17.968 sys 0m6.837s 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:17.968 ************************************ 00:13:17.968 END TEST nvmf_bdevio 00:13:17.968 ************************************ 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:17.968 00:13:17.968 real 5m23.218s 00:13:17.968 user 11m42.654s 00:13:17.968 sys 1m58.262s 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:17.968 ************************************ 00:13:17.968 END TEST nvmf_target_core 00:13:17.968 
************************************ 00:13:17.968 12:25:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.968 12:25:50 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:17.968 12:25:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.968 12:25:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.968 12:25:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.968 ************************************ 00:13:17.968 START TEST nvmf_target_extra 00:13:17.968 ************************************ 00:13:17.968 12:25:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:17.968 * Looking for test storage... 00:13:17.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
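run_test hands nvmf_example.sh the tcp transport here; the trace that follows repeats the e810 interface discovery and namespace bring-up, starts build/examples/nvmf inside the target namespace, provisions a malloc-backed subsystem over RPC, and drives it with spdk_nvme_perf. A condensed sketch of that sequence, assuming $SPDK_DIR is this checkout, that rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock, and with the addresses, flags, and NQN taken from the trace further down:

# Hedged sketch of the flow nvmf_example.sh traces below; arguments copied from this log.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
# (the test waits for the RPC socket via waitforlisten before issuing the RPCs below)

"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512        # creates Malloc0 (64 MiB, 512 B blocks)
"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the 10-second, 4 KiB mixed random read/write run summarized in the perf table below.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'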
00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.968 ************************************ 00:13:17.968 START TEST nvmf_example 00:13:17.968 ************************************ 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:17.968 * Looking for test storage... 00:13:17.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.968 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.969 12:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.969 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:26.116 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:26.116 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:26.116 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.116 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:26.116 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.116 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:26.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:13:26.378 00:13:26.378 --- 10.0.0.2 ping statistics --- 00:13:26.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.378 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:13:26.378 00:13:26.378 --- 10.0.0.1 ping statistics --- 00:13:26.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.378 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=333027 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 333027 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 333027 ']' 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.378 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:26.638 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:27.582 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:27.582 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.832 Initializing NVMe Controllers 00:13:39.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:39.832 Initialization complete. Launching workers. 00:13:39.832 ======================================================== 00:13:39.832 Latency(us) 00:13:39.832 Device Information : IOPS MiB/s Average min max 00:13:39.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10375.43 40.53 6168.12 846.58 20453.12 00:13:39.832 ======================================================== 00:13:39.832 Total : 10375.43 40.53 6168.12 846.58 20453.12 00:13:39.832 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.832 rmmod nvme_tcp 00:13:39.832 rmmod nvme_fabrics 00:13:39.832 rmmod nvme_keyring 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 333027 ']' 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 333027 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 333027 ']' 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 333027 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 333027 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 333027' 00:13:39.832 killing process with pid 333027 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 333027 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 333027 00:13:39.832 nvmf threads initialize successfully 00:13:39.832 bdev subsystem init successfully 00:13:39.832 created a nvmf target service 00:13:39.832 create targets's poll groups done 00:13:39.832 all subsystems of target started 00:13:39.832 nvmf target is running 00:13:39.832 all subsystems of target stopped 00:13:39.832 destroy targets's poll groups done 00:13:39.832 destroyed the nvmf target service 00:13:39.832 bdev subsystem finish successfully 00:13:39.832 nvmf threads destroy successfully 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.832 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:40.129 00:13:40.129 real 0m22.303s 00:13:40.129 user 0m46.307s 00:13:40.129 sys 0m7.927s 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:40.129 ************************************ 00:13:40.129 END TEST nvmf_example 00:13:40.129 ************************************ 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 
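The rpc_cmd calls traced above (target/nvmf_example.sh@45 through @57) map one-to-one onto SPDK's scripts/rpc.py client, so the same example target can be reproduced by hand. A minimal sketch follows, assuming an SPDK checkout as the working directory and the interfaces/netns from this run (cvl_0_0_ns_spdk, 10.0.0.2) already configured; the wrapper script and the sleep-based wait are illustrative only, while the RPC arguments and the perf command line are taken verbatim from the log:

#!/usr/bin/env bash
# Sketch: replay the target setup the nvmf_example test drove through rpc_cmd above.
set -euo pipefail

# Start the example NVMe-oF target inside the test netns on cores 0-3 (-m 0xF).
ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
sleep 2   # crude stand-in for waitforlisten on /var/tmp/spdk.sock

# Same RPC sequence as in the trace: transport, malloc bdev, subsystem, namespace, listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Exercise it with the same workload as the test: 10 s, queue depth 64, 4 KiB I/O, 30% reads.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Teardown mirrors the nvmftestfini trace above: kill the nvmf pid and modprobe -r the nvme-tcp/nvme-fabrics modules that nvmf/common.sh loaded during setup.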
00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.129 ************************************ 00:13:40.129 START TEST nvmf_filesystem 00:13:40.129 ************************************ 00:13:40.129 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:40.392 * Looking for test storage... 00:13:40.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:40.392 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:40.393 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:40.393 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:40.393 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:40.393 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:40.393 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:40.393 #define SPDK_CONFIG_H 00:13:40.393 #define SPDK_CONFIG_APPS 1 00:13:40.393 #define SPDK_CONFIG_ARCH native 00:13:40.393 #undef SPDK_CONFIG_ASAN 00:13:40.394 #undef SPDK_CONFIG_AVAHI 00:13:40.394 #undef SPDK_CONFIG_CET 00:13:40.394 #define SPDK_CONFIG_COVERAGE 1 00:13:40.394 #define SPDK_CONFIG_CROSS_PREFIX 00:13:40.394 #undef SPDK_CONFIG_CRYPTO 00:13:40.394 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:40.394 #undef SPDK_CONFIG_CUSTOMOCF 00:13:40.394 #undef SPDK_CONFIG_DAOS 00:13:40.394 #define SPDK_CONFIG_DAOS_DIR 00:13:40.394 #define SPDK_CONFIG_DEBUG 1 00:13:40.394 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:40.394 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:40.394 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:40.394 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:40.394 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:40.394 #undef SPDK_CONFIG_DPDK_UADK 00:13:40.394 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:40.394 #define SPDK_CONFIG_EXAMPLES 1 00:13:40.394 #undef SPDK_CONFIG_FC 00:13:40.394 #define SPDK_CONFIG_FC_PATH 00:13:40.394 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:40.394 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:40.394 #undef SPDK_CONFIG_FUSE 00:13:40.394 #undef SPDK_CONFIG_FUZZER 00:13:40.394 #define SPDK_CONFIG_FUZZER_LIB 00:13:40.394 #undef SPDK_CONFIG_GOLANG 00:13:40.394 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:40.394 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:40.394 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:40.394 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:40.394 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:40.394 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:40.394 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:40.394 #define SPDK_CONFIG_IDXD 1 00:13:40.394 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:40.394 #undef SPDK_CONFIG_IPSEC_MB 00:13:40.394 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:40.394 #define SPDK_CONFIG_ISAL 1 00:13:40.394 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:40.394 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:40.394 #define SPDK_CONFIG_LIBDIR 00:13:40.394 #undef SPDK_CONFIG_LTO 00:13:40.394 #define SPDK_CONFIG_MAX_LCORES 128 00:13:40.394 #define SPDK_CONFIG_NVME_CUSE 1 00:13:40.394 #undef SPDK_CONFIG_OCF 00:13:40.394 #define SPDK_CONFIG_OCF_PATH 00:13:40.394 #define SPDK_CONFIG_OPENSSL_PATH 00:13:40.394 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:40.394 #define SPDK_CONFIG_PGO_DIR 00:13:40.394 #undef SPDK_CONFIG_PGO_USE 00:13:40.394 #define SPDK_CONFIG_PREFIX /usr/local 00:13:40.394 #undef SPDK_CONFIG_RAID5F 00:13:40.394 #undef SPDK_CONFIG_RBD 00:13:40.394 #define SPDK_CONFIG_RDMA 1 00:13:40.394 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:40.394 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:40.394 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:40.394 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:40.394 #define SPDK_CONFIG_SHARED 1 00:13:40.394 #undef SPDK_CONFIG_SMA 00:13:40.394 #define SPDK_CONFIG_TESTS 1 00:13:40.394 #undef SPDK_CONFIG_TSAN 00:13:40.394 #define SPDK_CONFIG_UBLK 1 00:13:40.394 #define SPDK_CONFIG_UBSAN 1 00:13:40.394 #undef SPDK_CONFIG_UNIT_TESTS 00:13:40.394 #undef SPDK_CONFIG_URING 00:13:40.394 #define SPDK_CONFIG_URING_PATH 00:13:40.394 #undef SPDK_CONFIG_URING_ZNS 00:13:40.394 #undef SPDK_CONFIG_USDT 00:13:40.394 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:40.394 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:40.394 #define SPDK_CONFIG_VFIO_USER 1 00:13:40.394 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:13:40.394 #define SPDK_CONFIG_VHOST 1 00:13:40.394 #define SPDK_CONFIG_VIRTIO 1 00:13:40.394 #undef SPDK_CONFIG_VTUNE 00:13:40.394 #define SPDK_CONFIG_VTUNE_DIR 00:13:40.394 #define SPDK_CONFIG_WERROR 1 00:13:40.394 #define SPDK_CONFIG_WPDK_DIR 00:13:40.394 #undef SPDK_CONFIG_XNVME 00:13:40.394 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:40.394 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:40.394 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:40.395 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:40.395 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:13:40.395 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:40.396 
12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:13:40.396 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j128 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 335552 ]] 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 335552 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:13:40.396 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.HNYfi1 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HNYfi1/tests/target /tmp/spdk.HNYfi1 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:13:40.397 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954712064 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4329717760 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=123687325696 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129376288768 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5688963072 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64676888576 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64688144384 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25851715584 00:13:40.397 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25875259392 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=23543808 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=339968 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=163840 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64687562752 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64688144384 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=581632 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937621504 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937625600 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:13:40.397 * Looking for test storage... 
00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=123687325696 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:13:40.397 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7903555584 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.398 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.658 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.659 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.801 
12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.801 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:48.802 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:48.802 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:48.802 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:48.802 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.802 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:13:48.802 00:13:48.802 --- 10.0.0.2 ping statistics --- 00:13:48.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.802 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:48.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:13:48.802 00:13:48.802 --- 10.0.0.1 ping statistics --- 00:13:48.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.802 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.802 ************************************ 00:13:48.802 START TEST nvmf_filesystem_no_in_capsule 00:13:48.802 ************************************ 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=339402 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 339402 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 339402 ']' 00:13:48.802 12:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.802 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.803 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.803 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:48.803 [2024-07-25 12:26:22.180838] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:13:48.803 [2024-07-25 12:26:22.180899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.803 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.064 [2024-07-25 12:26:22.274182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.064 [2024-07-25 12:26:22.367079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.064 [2024-07-25 12:26:22.367135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.064 [2024-07-25 12:26:22.367142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.064 [2024-07-25 12:26:22.367149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.064 [2024-07-25 12:26:22.367155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:49.064 [2024-07-25 12:26:22.367284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.064 [2024-07-25 12:26:22.367422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.064 [2024-07-25 12:26:22.367618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.064 [2024-07-25 12:26:22.367645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.007 [2024-07-25 12:26:23.110419] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.007 Malloc1 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.007 12:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.007 [2024-07-25 12:26:23.267297] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:50.007 { 00:13:50.007 "name": "Malloc1", 00:13:50.007 "aliases": [ 00:13:50.007 "9276f7ca-807a-43bc-9302-bca9978f75ed" 00:13:50.007 ], 00:13:50.007 "product_name": "Malloc disk", 00:13:50.007 "block_size": 512, 00:13:50.007 "num_blocks": 1048576, 00:13:50.007 "uuid": "9276f7ca-807a-43bc-9302-bca9978f75ed", 00:13:50.007 "assigned_rate_limits": { 00:13:50.007 "rw_ios_per_sec": 0, 00:13:50.007 "rw_mbytes_per_sec": 0, 00:13:50.007 "r_mbytes_per_sec": 0, 00:13:50.007 "w_mbytes_per_sec": 0 00:13:50.007 }, 00:13:50.007 "claimed": true, 00:13:50.007 "claim_type": "exclusive_write", 00:13:50.007 "zoned": false, 00:13:50.007 "supported_io_types": { 00:13:50.007 "read": 
true, 00:13:50.007 "write": true, 00:13:50.007 "unmap": true, 00:13:50.007 "flush": true, 00:13:50.007 "reset": true, 00:13:50.007 "nvme_admin": false, 00:13:50.007 "nvme_io": false, 00:13:50.007 "nvme_io_md": false, 00:13:50.007 "write_zeroes": true, 00:13:50.007 "zcopy": true, 00:13:50.007 "get_zone_info": false, 00:13:50.007 "zone_management": false, 00:13:50.007 "zone_append": false, 00:13:50.007 "compare": false, 00:13:50.007 "compare_and_write": false, 00:13:50.007 "abort": true, 00:13:50.007 "seek_hole": false, 00:13:50.007 "seek_data": false, 00:13:50.007 "copy": true, 00:13:50.007 "nvme_iov_md": false 00:13:50.007 }, 00:13:50.007 "memory_domains": [ 00:13:50.007 { 00:13:50.007 "dma_device_id": "system", 00:13:50.007 "dma_device_type": 1 00:13:50.007 }, 00:13:50.007 { 00:13:50.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.007 "dma_device_type": 2 00:13:50.007 } 00:13:50.007 ], 00:13:50.007 "driver_specific": {} 00:13:50.007 } 00:13:50.007 ]' 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:50.007 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.949 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:51.949 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.949 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.949 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:51.949 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:53.858 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:53.858 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:54.426 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:55.366 ************************************ 00:13:55.366 START TEST filesystem_ext4 00:13:55.366 ************************************ 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:13:55.366 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:55.366 mke2fs 1.46.5 (30-Dec-2021) 00:13:55.626 Discarding device blocks: 0/522240 done 00:13:55.627 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:55.627 Filesystem UUID: ce5b6045-ffa4-4c71-80d2-449811eac4f1 00:13:55.627 Superblock backups stored on blocks: 00:13:55.627 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:55.627 00:13:55.627 Allocating group tables: 0/64 done 00:13:55.627 Writing inode tables: 0/64 done 00:13:58.168 Creating journal (8192 blocks): done 00:13:58.997 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:13:58.997 00:13:58.997 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:13:58.997 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:59.257 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:59.517 
12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 339402 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:59.517 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:59.517 00:13:59.517 real 0m4.056s 00:13:59.517 user 0m0.022s 00:13:59.517 sys 0m0.055s 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:59.518 ************************************ 00:13:59.518 END TEST filesystem_ext4 00:13:59.518 ************************************ 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.518 ************************************ 00:13:59.518 START TEST filesystem_btrfs 00:13:59.518 ************************************ 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:13:59.518 12:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:13:59.518 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:00.096 btrfs-progs v6.6.2 00:14:00.096 See https://btrfs.readthedocs.io for more information. 00:14:00.096 00:14:00.096 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:00.096 NOTE: several default settings have changed in version 5.15, please make sure 00:14:00.096 this does not affect your deployments: 00:14:00.096 - DUP for metadata (-m dup) 00:14:00.096 - enabled no-holes (-O no-holes) 00:14:00.096 - enabled free-space-tree (-R free-space-tree) 00:14:00.096 00:14:00.096 Label: (null) 00:14:00.096 UUID: 259ade68-0145-44fd-ad1d-ac5c2fd657a6 00:14:00.096 Node size: 16384 00:14:00.096 Sector size: 4096 00:14:00.096 Filesystem size: 510.00MiB 00:14:00.096 Block group profiles: 00:14:00.096 Data: single 8.00MiB 00:14:00.096 Metadata: DUP 32.00MiB 00:14:00.096 System: DUP 8.00MiB 00:14:00.096 SSD detected: yes 00:14:00.096 Zoned device: no 00:14:00.096 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:00.096 Runtime features: free-space-tree 00:14:00.096 Checksum: crc32c 00:14:00.096 Number of devices: 1 00:14:00.096 Devices: 00:14:00.096 ID SIZE PATH 00:14:00.096 1 510.00MiB /dev/nvme0n1p1 00:14:00.096 00:14:00.096 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:00.096 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:00.358 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:00.358 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:00.358 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:00.358 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:00.358 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:00.358 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 339402 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:00.619 00:14:00.619 real 0m0.958s 00:14:00.619 user 0m0.021s 00:14:00.619 sys 0m0.067s 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:00.619 ************************************ 00:14:00.619 END TEST filesystem_btrfs 00:14:00.619 ************************************ 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:00.619 ************************************ 00:14:00.619 START TEST filesystem_xfs 00:14:00.619 ************************************ 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:00.619 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:00.620 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:00.620 12:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:00.620 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:00.620 = sectsz=512 attr=2, projid32bit=1 00:14:00.620 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:00.620 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:00.620 data = bsize=4096 blocks=130560, imaxpct=25 00:14:00.620 = sunit=0 swidth=0 blks 00:14:00.620 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:00.620 log =internal log bsize=4096 blocks=16384, version=2 00:14:00.620 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:00.620 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:01.560 Discarding blocks...Done. 00:14:01.560 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:01.560 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 339402 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:03.472 00:14:03.472 real 0m2.890s 00:14:03.472 user 0m0.026s 00:14:03.472 sys 0m0.052s 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:03.472 ************************************ 00:14:03.472 END TEST filesystem_xfs 00:14:03.472 ************************************ 00:14:03.472 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:03.472 12:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:03.733 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:03.733 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.733 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:03.733 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:03.733 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:03.733 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 339402 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 339402 ']' 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 339402 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 339402 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:03.733 12:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 339402' 00:14:03.733 killing process with pid 339402 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 339402 00:14:03.733 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 339402 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:03.994 00:14:03.994 real 0m15.194s 00:14:03.994 user 0m59.697s 00:14:03.994 sys 0m1.231s 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.994 ************************************ 00:14:03.994 END TEST nvmf_filesystem_no_in_capsule 00:14:03.994 ************************************ 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.994 ************************************ 00:14:03.994 START TEST nvmf_filesystem_in_capsule 00:14:03.994 ************************************ 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=342083 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 342083 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.994 12:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 342083 ']' 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.994 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:04.254 [2024-07-25 12:26:37.453237] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:14:04.254 [2024-07-25 12:26:37.453291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.254 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.254 [2024-07-25 12:26:37.545802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.254 [2024-07-25 12:26:37.624122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.254 [2024-07-25 12:26:37.624166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.254 [2024-07-25 12:26:37.624173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.254 [2024-07-25 12:26:37.624180] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.254 [2024-07-25 12:26:37.624185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
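For reference, the xtrace entries that follow configure the freshly started nvmf_tgt over JSON-RPC and then connect the host to it. Reproduced by hand outside the test harness, the same sequence would look roughly like the sketch below; the subsystem NQN, serial, listen address and the 4096-byte in-capsule size are taken from the trace, while the rpc.py location, the default /var/tmp/spdk.sock RPC socket and the omission of the cvl_0_0_ns_spdk network-namespace wrapper are assumptions.

  # Start the target (the harness runs it inside the cvl_0_0_ns_spdk netns with -i 0 -e 0xFFFF -m 0xF)
  ./build/bin/nvmf_tgt -m 0xF &

  # TCP transport with 4096 bytes of in-capsule data, matching the "-c 4096" passed above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

  # 512 MB malloc bdev with 512-byte blocks, exported as a namespace of cnode1 on 10.0.0.2:4420
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side, as issued at target/filesystem.sh line 60 in the trace
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
               --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420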
00:14:04.254 [2024-07-25 12:26:37.624305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.254 [2024-07-25 12:26:37.624486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.254 [2024-07-25 12:26:37.624632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.254 [2024-07-25 12:26:37.624634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:05.197 [2024-07-25 12:26:38.326600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:05.197 Malloc1 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:05.197 [2024-07-25 12:26:38.458294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:05.197 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:05.198 { 00:14:05.198 "name": "Malloc1", 00:14:05.198 "aliases": [ 00:14:05.198 "22a073b6-be0b-45e1-9a2d-39e8c1e78e8b" 00:14:05.198 ], 00:14:05.198 "product_name": "Malloc disk", 00:14:05.198 "block_size": 512, 00:14:05.198 "num_blocks": 1048576, 00:14:05.198 "uuid": "22a073b6-be0b-45e1-9a2d-39e8c1e78e8b", 00:14:05.198 "assigned_rate_limits": { 00:14:05.198 "rw_ios_per_sec": 0, 00:14:05.198 "rw_mbytes_per_sec": 0, 00:14:05.198 "r_mbytes_per_sec": 0, 00:14:05.198 "w_mbytes_per_sec": 0 00:14:05.198 }, 00:14:05.198 "claimed": true, 00:14:05.198 "claim_type": "exclusive_write", 00:14:05.198 "zoned": false, 00:14:05.198 "supported_io_types": { 00:14:05.198 "read": true, 00:14:05.198 "write": true, 00:14:05.198 "unmap": true, 00:14:05.198 "flush": true, 00:14:05.198 "reset": true, 00:14:05.198 "nvme_admin": false, 
00:14:05.198 "nvme_io": false, 00:14:05.198 "nvme_io_md": false, 00:14:05.198 "write_zeroes": true, 00:14:05.198 "zcopy": true, 00:14:05.198 "get_zone_info": false, 00:14:05.198 "zone_management": false, 00:14:05.198 "zone_append": false, 00:14:05.198 "compare": false, 00:14:05.198 "compare_and_write": false, 00:14:05.198 "abort": true, 00:14:05.198 "seek_hole": false, 00:14:05.198 "seek_data": false, 00:14:05.198 "copy": true, 00:14:05.198 "nvme_iov_md": false 00:14:05.198 }, 00:14:05.198 "memory_domains": [ 00:14:05.198 { 00:14:05.198 "dma_device_id": "system", 00:14:05.198 "dma_device_type": 1 00:14:05.198 }, 00:14:05.198 { 00:14:05.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.198 "dma_device_type": 2 00:14:05.198 } 00:14:05.198 ], 00:14:05.198 "driver_specific": {} 00:14:05.198 } 00:14:05.198 ]' 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:05.198 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.109 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:07.109 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:07.109 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.109 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:07.109 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:09.021 12:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:09.021 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:09.591 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:10.975 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:10.975 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:10.975 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:10.975 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.975 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:10.975 ************************************ 00:14:10.975 START TEST filesystem_in_capsule_ext4 00:14:10.975 ************************************ 00:14:10.975 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:10.975 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:10.975 12:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:14:10.976 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:10.976 mke2fs 1.46.5 (30-Dec-2021) 00:14:10.976 Discarding device blocks: 0/522240 done 00:14:10.976 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:10.976 Filesystem UUID: b8940915-bf57-4e78-8b02-834131fb5e91 00:14:10.976 Superblock backups stored on blocks: 00:14:10.976 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:10.976 00:14:10.976 Allocating group tables: 0/64 done 00:14:10.976 Writing inode tables: 0/64 done 00:14:10.976 Creating journal (8192 blocks): done 00:14:11.916 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:14:11.916 00:14:11.916 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:14:11.916 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:12.487 12:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 342083 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:12.487 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:12.488 00:14:12.488 real 0m1.769s 00:14:12.488 user 0m0.035s 00:14:12.488 sys 0m0.038s 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:12.488 ************************************ 00:14:12.488 END TEST filesystem_in_capsule_ext4 00:14:12.488 ************************************ 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:12.488 ************************************ 00:14:12.488 START TEST filesystem_in_capsule_btrfs 00:14:12.488 ************************************ 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:14:12.488 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:14:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:14:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:14:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:14:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:12.748 btrfs-progs v6.6.2 00:14:12.748 See https://btrfs.readthedocs.io for more information. 00:14:12.748 00:14:12.748 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:12.748 NOTE: several default settings have changed in version 5.15, please make sure 00:14:12.748 this does not affect your deployments: 00:14:12.748 - DUP for metadata (-m dup) 00:14:12.748 - enabled no-holes (-O no-holes) 00:14:12.748 - enabled free-space-tree (-R free-space-tree) 00:14:12.748 00:14:12.748 Label: (null) 00:14:12.748 UUID: ec115204-8b83-42f8-89b9-655a5a9c5068 00:14:12.748 Node size: 16384 00:14:12.748 Sector size: 4096 00:14:12.748 Filesystem size: 510.00MiB 00:14:12.749 Block group profiles: 00:14:12.749 Data: single 8.00MiB 00:14:12.749 Metadata: DUP 32.00MiB 00:14:12.749 System: DUP 8.00MiB 00:14:12.749 SSD detected: yes 00:14:12.749 Zoned device: no 00:14:12.749 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:12.749 Runtime features: free-space-tree 00:14:12.749 Checksum: crc32c 00:14:12.749 Number of devices: 1 00:14:12.749 Devices: 00:14:12.749 ID SIZE PATH 00:14:12.749 1 510.00MiB /dev/nvme0n1p1 00:14:12.749 00:14:12.749 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:12.749 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 342083 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:13.320 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:13.580 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:13.580 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:13.581 00:14:13.581 real 0m0.847s 00:14:13.581 user 0m0.024s 00:14:13.581 sys 0m0.066s 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:13.581 ************************************ 00:14:13.581 END TEST filesystem_in_capsule_btrfs 00:14:13.581 ************************************ 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.581 ************************************ 00:14:13.581 START TEST filesystem_in_capsule_xfs 00:14:13.581 ************************************ 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:14:13.581 12:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:13.581 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:13.581 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:13.581 = sectsz=512 attr=2, projid32bit=1 00:14:13.581 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:13.581 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:13.581 data = bsize=4096 blocks=130560, imaxpct=25 00:14:13.581 = sunit=0 swidth=0 blks 00:14:13.581 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:13.581 log =internal log bsize=4096 blocks=16384, version=2 00:14:13.581 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:13.581 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:14.961 Discarding blocks...Done. 00:14:14.961 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:14.961 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:16.873 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:16.873 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:16.873 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:16.873 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:16.873 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:16.873 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 342083 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:16.874 00:14:16.874 real 0m3.312s 00:14:16.874 user 0m0.022s 00:14:16.874 sys 0m0.059s 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:16.874 
12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:16.874 ************************************ 00:14:16.874 END TEST filesystem_in_capsule_xfs 00:14:16.874 ************************************ 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:16.874 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.134 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 342083 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 342083 ']' 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 342083 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
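For reference, the teardown traced around this point — disconnect the host, wait for the SPDK serial to disappear, delete the subsystem, then stop the target — condenses to roughly the following manual sequence. The NQN, serial and pid 342083 come from the trace; the rpc.py location and default RPC socket are assumptions, and the final "wait" only applies if nvmf_tgt is a child of the current shell.

  # Host side: drop the connection and poll until no block device reports the SPDK serial
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

  # Target side: remove the subsystem, then stop nvmf_tgt (pid 342083 in this run)
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 342083 && wait 342083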
00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 342083 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 342083' 00:14:17.135 killing process with pid 342083 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 342083 00:14:17.135 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 342083 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:17.395 00:14:17.395 real 0m13.264s 00:14:17.395 user 0m52.134s 00:14:17.395 sys 0m1.121s 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.395 ************************************ 00:14:17.395 END TEST nvmf_filesystem_in_capsule 00:14:17.395 ************************************ 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.395 rmmod nvme_tcp 00:14:17.395 rmmod nvme_fabrics 00:14:17.395 rmmod nvme_keyring 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:17.395 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.396 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.938 00:14:19.938 real 0m39.307s 00:14:19.938 user 1m54.286s 00:14:19.938 sys 0m8.665s 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:19.938 ************************************ 00:14:19.938 END TEST nvmf_filesystem 00:14:19.938 ************************************ 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.938 ************************************ 00:14:19.938 START TEST nvmf_target_discovery 00:14:19.938 ************************************ 00:14:19.938 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:19.938 * Looking for test storage... 
00:14:19.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.938 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.939 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.077 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.077 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.077 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.077 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.077 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.077 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.077 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:14:28.078 12:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:28.078 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:28.078 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:28.078 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
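The loop traced out of nvmf/common.sh in these lines resolves each supported PCI function to its kernel interface name through sysfs before the script picks target and initiator ports. A rough stand-alone equivalent, assuming the same 0000:4b:00.x addresses used in this run:

    # sketch only -- prints the net device(s) bound to each PCI function via sysfs
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net device under $pci: ${dev##*/}"
        done
    done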
00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:28.078 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.078 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:28.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:14:28.339 00:14:28.339 --- 10.0.0.2 ping statistics --- 00:14:28.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.339 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:14:28.339 00:14:28.339 --- 10.0.0.1 ping statistics --- 00:14:28.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.339 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.339 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=348974 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 348974 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 348974 ']' 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:28.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.340 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.340 [2024-07-25 12:27:01.599985] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:14:28.340 [2024-07-25 12:27:01.600048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.340 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.340 [2024-07-25 12:27:01.693986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.600 [2024-07-25 12:27:01.787218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.600 [2024-07-25 12:27:01.787279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.600 [2024-07-25 12:27:01.787293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.601 [2024-07-25 12:27:01.787299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.601 [2024-07-25 12:27:01.787305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.601 [2024-07-25 12:27:01.787440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.601 [2024-07-25 12:27:01.787611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.601 [2024-07-25 12:27:01.787775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.601 [2024-07-25 12:27:01.787775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.249 [2024-07-25 12:27:02.533486] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.249 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:29.250 12:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 Null1 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 [2024-07-25 12:27:02.598207] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 Null2 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.250 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.515 Null3 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:29.515 12:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.515 Null4 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.515 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:14:29.516 00:14:29.516 Discovery Log Number of Records 6, Generation counter 6 00:14:29.516 
=====Discovery Log Entry 0====== 00:14:29.516 trtype: tcp 00:14:29.516 adrfam: ipv4 00:14:29.516 subtype: current discovery subsystem 00:14:29.516 treq: not required 00:14:29.516 portid: 0 00:14:29.516 trsvcid: 4420 00:14:29.516 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:29.516 traddr: 10.0.0.2 00:14:29.516 eflags: explicit discovery connections, duplicate discovery information 00:14:29.516 sectype: none 00:14:29.516 =====Discovery Log Entry 1====== 00:14:29.516 trtype: tcp 00:14:29.516 adrfam: ipv4 00:14:29.516 subtype: nvme subsystem 00:14:29.516 treq: not required 00:14:29.516 portid: 0 00:14:29.516 trsvcid: 4420 00:14:29.516 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:29.516 traddr: 10.0.0.2 00:14:29.516 eflags: none 00:14:29.516 sectype: none 00:14:29.516 =====Discovery Log Entry 2====== 00:14:29.516 trtype: tcp 00:14:29.516 adrfam: ipv4 00:14:29.516 subtype: nvme subsystem 00:14:29.516 treq: not required 00:14:29.516 portid: 0 00:14:29.516 trsvcid: 4420 00:14:29.516 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:29.516 traddr: 10.0.0.2 00:14:29.516 eflags: none 00:14:29.516 sectype: none 00:14:29.516 =====Discovery Log Entry 3====== 00:14:29.516 trtype: tcp 00:14:29.516 adrfam: ipv4 00:14:29.516 subtype: nvme subsystem 00:14:29.516 treq: not required 00:14:29.516 portid: 0 00:14:29.516 trsvcid: 4420 00:14:29.516 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:29.516 traddr: 10.0.0.2 00:14:29.516 eflags: none 00:14:29.516 sectype: none 00:14:29.516 =====Discovery Log Entry 4====== 00:14:29.516 trtype: tcp 00:14:29.516 adrfam: ipv4 00:14:29.516 subtype: nvme subsystem 00:14:29.516 treq: not required 00:14:29.516 portid: 0 00:14:29.516 trsvcid: 4420 00:14:29.516 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:29.516 traddr: 10.0.0.2 00:14:29.516 eflags: none 00:14:29.516 sectype: none 00:14:29.516 =====Discovery Log Entry 5====== 00:14:29.516 trtype: tcp 00:14:29.516 adrfam: ipv4 00:14:29.516 subtype: discovery subsystem referral 00:14:29.516 treq: not required 00:14:29.516 portid: 0 00:14:29.516 trsvcid: 4430 00:14:29.516 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:29.516 traddr: 10.0.0.2 00:14:29.516 eflags: none 00:14:29.516 sectype: none 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:29.516 Perform nvmf subsystem discovery via RPC 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.516 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.516 [ 00:14:29.516 { 00:14:29.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.516 "subtype": "Discovery", 00:14:29.516 "listen_addresses": [ 00:14:29.516 { 00:14:29.516 "trtype": "TCP", 00:14:29.516 "adrfam": "IPv4", 00:14:29.516 "traddr": "10.0.0.2", 00:14:29.516 "trsvcid": "4420" 00:14:29.516 } 00:14:29.516 ], 00:14:29.516 "allow_any_host": true, 00:14:29.516 "hosts": [] 00:14:29.516 }, 00:14:29.516 { 00:14:29.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.516 "subtype": "NVMe", 00:14:29.516 "listen_addresses": [ 00:14:29.516 { 00:14:29.516 "trtype": "TCP", 00:14:29.516 "adrfam": "IPv4", 00:14:29.516 "traddr": "10.0.0.2", 00:14:29.516 "trsvcid": "4420" 00:14:29.516 } 00:14:29.516 ], 00:14:29.516 "allow_any_host": true, 00:14:29.516 "hosts": [], 00:14:29.516 
"serial_number": "SPDK00000000000001", 00:14:29.516 "model_number": "SPDK bdev Controller", 00:14:29.516 "max_namespaces": 32, 00:14:29.516 "min_cntlid": 1, 00:14:29.516 "max_cntlid": 65519, 00:14:29.516 "namespaces": [ 00:14:29.516 { 00:14:29.516 "nsid": 1, 00:14:29.516 "bdev_name": "Null1", 00:14:29.516 "name": "Null1", 00:14:29.516 "nguid": "1E6F615DDD55475D9DAD639B712A5E40", 00:14:29.516 "uuid": "1e6f615d-dd55-475d-9dad-639b712a5e40" 00:14:29.516 } 00:14:29.516 ] 00:14:29.516 }, 00:14:29.516 { 00:14:29.516 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:29.516 "subtype": "NVMe", 00:14:29.516 "listen_addresses": [ 00:14:29.516 { 00:14:29.777 "trtype": "TCP", 00:14:29.777 "adrfam": "IPv4", 00:14:29.777 "traddr": "10.0.0.2", 00:14:29.777 "trsvcid": "4420" 00:14:29.777 } 00:14:29.777 ], 00:14:29.777 "allow_any_host": true, 00:14:29.777 "hosts": [], 00:14:29.777 "serial_number": "SPDK00000000000002", 00:14:29.777 "model_number": "SPDK bdev Controller", 00:14:29.777 "max_namespaces": 32, 00:14:29.777 "min_cntlid": 1, 00:14:29.777 "max_cntlid": 65519, 00:14:29.777 "namespaces": [ 00:14:29.777 { 00:14:29.777 "nsid": 1, 00:14:29.777 "bdev_name": "Null2", 00:14:29.777 "name": "Null2", 00:14:29.777 "nguid": "1D190BB0C21D41BF88C6A8DC06C865E3", 00:14:29.777 "uuid": "1d190bb0-c21d-41bf-88c6-a8dc06c865e3" 00:14:29.777 } 00:14:29.777 ] 00:14:29.777 }, 00:14:29.777 { 00:14:29.777 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:29.777 "subtype": "NVMe", 00:14:29.777 "listen_addresses": [ 00:14:29.777 { 00:14:29.777 "trtype": "TCP", 00:14:29.777 "adrfam": "IPv4", 00:14:29.777 "traddr": "10.0.0.2", 00:14:29.777 "trsvcid": "4420" 00:14:29.777 } 00:14:29.777 ], 00:14:29.777 "allow_any_host": true, 00:14:29.777 "hosts": [], 00:14:29.777 "serial_number": "SPDK00000000000003", 00:14:29.777 "model_number": "SPDK bdev Controller", 00:14:29.777 "max_namespaces": 32, 00:14:29.777 "min_cntlid": 1, 00:14:29.777 "max_cntlid": 65519, 00:14:29.777 "namespaces": [ 00:14:29.777 { 00:14:29.777 "nsid": 1, 00:14:29.777 "bdev_name": "Null3", 00:14:29.777 "name": "Null3", 00:14:29.777 "nguid": "47B452ACF8434C23BACB9F8EFDF18800", 00:14:29.777 "uuid": "47b452ac-f843-4c23-bacb-9f8efdf18800" 00:14:29.777 } 00:14:29.777 ] 00:14:29.777 }, 00:14:29.777 { 00:14:29.777 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:29.777 "subtype": "NVMe", 00:14:29.777 "listen_addresses": [ 00:14:29.777 { 00:14:29.777 "trtype": "TCP", 00:14:29.777 "adrfam": "IPv4", 00:14:29.777 "traddr": "10.0.0.2", 00:14:29.777 "trsvcid": "4420" 00:14:29.777 } 00:14:29.777 ], 00:14:29.777 "allow_any_host": true, 00:14:29.777 "hosts": [], 00:14:29.777 "serial_number": "SPDK00000000000004", 00:14:29.777 "model_number": "SPDK bdev Controller", 00:14:29.777 "max_namespaces": 32, 00:14:29.777 "min_cntlid": 1, 00:14:29.777 "max_cntlid": 65519, 00:14:29.777 "namespaces": [ 00:14:29.777 { 00:14:29.777 "nsid": 1, 00:14:29.777 "bdev_name": "Null4", 00:14:29.777 "name": "Null4", 00:14:29.777 "nguid": "D5EDDE33D79C44F6A6C8293496907B67", 00:14:29.777 "uuid": "d5edde33-d79c-44f6-a6c8-293496907b67" 00:14:29.777 } 00:14:29.777 ] 00:14:29.777 } 00:14:29.777 ] 00:14:29.777 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:29.778 12:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.778 rmmod nvme_tcp 00:14:29.778 rmmod nvme_fabrics 00:14:29.778 rmmod nvme_keyring 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 
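For orientation, the RPC sequence this discovery test drives against the target (traced above, before the cleanup that follows) boils down to roughly the steps below, written as a sketch with the standard scripts/rpc.py client standing in for the test's rpc_cmd wrapper; the NQNs, addresses and bdev sizes are the ones from this run, and the host-side discover command is simplified (the test additionally passes --hostnqn/--hostid):

    # sketch only -- assumes rpc.py talks to the target's default RPC socket
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420    # run from the initiator; returns the six log entries shown above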
00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 348974 ']' 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 348974 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 348974 ']' 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 348974 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.778 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 348974 00:14:30.055 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 348974' 00:14:30.056 killing process with pid 348974 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 348974 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 348974 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.056 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.604 00:14:32.604 real 0m12.522s 00:14:32.604 user 0m9.001s 00:14:32.604 sys 0m6.662s 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.604 ************************************ 00:14:32.604 END TEST nvmf_target_discovery 00:14:32.604 ************************************ 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.604 ************************************ 00:14:32.604 START TEST nvmf_referrals 00:14:32.604 ************************************ 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:32.604 * Looking for test storage... 00:14:32.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:32.604 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.605 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.605 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.744 12:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:40.744 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.744 12:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:40.744 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:40.744 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:40.744 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.744 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.744 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.745 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.745 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.745 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:41.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:14:41.006 00:14:41.006 --- 10.0.0.2 ping statistics --- 00:14:41.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.006 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:14:41.006 00:14:41.006 --- 10.0.0.1 ping statistics --- 00:14:41.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.006 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=354217 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 354217 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 354217 ']' 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
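By this point nvmf_tcp_init and nvmfappstart have done their work: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, its sibling cvl_0_1 stayed in the root namespace as the initiator at 10.0.0.1, TCP/4420 was opened on the initiator interface, one ping in each direction proved the path, and nvmf_tgt was launched inside the namespace with waitforlisten blocking on /var/tmp/spdk.sock. Condensed from the trace, with the names and addresses exactly as this run used them:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, in the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF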
00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.006 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:41.006 [2024-07-25 12:27:14.313703] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:14:41.006 [2024-07-25 12:27:14.313770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.006 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.006 [2024-07-25 12:27:14.406932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.267 [2024-07-25 12:27:14.500151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.267 [2024-07-25 12:27:14.500208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.267 [2024-07-25 12:27:14.500216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.267 [2024-07-25 12:27:14.500222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.267 [2024-07-25 12:27:14.500228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.267 [2024-07-25 12:27:14.500360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.267 [2024-07-25 12:27:14.500520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.267 [2024-07-25 12:27:14.500679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.267 [2024-07-25 12:27:14.500679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.838 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:41.838 [2024-07-25 12:27:15.245410] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.098 12:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 [2024-07-25 12:27:15.265966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:42.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.359 12:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:42.360 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
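The comparisons above and below keep calling get_referral_ips, which reads the referral list through two independent paths: the rpc variant asks the target over the RPC socket, while the nvme variant asks the discovery controller at 10.0.0.2:8009 itself via nvme-cli and filters its JSON log page. Reduced from the xtrace to plain pipelines (rpc_cmd is the autotest RPC wrapper; NVME_HOSTNQN and NVME_HOSTID are the per-run values set in nvmf/common.sh):

    # get_referral_ips rpc:  referral traddrs as the target reports them
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # get_referral_ips nvme: referral traddrs as the initiator sees them in the discovery log
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort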
00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.620 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.881 12:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.881 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:42.882 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.142 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:43.402 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:43.402 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:43.402 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.402 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.402 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.402 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
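The assertions traced above pin down how the -n argument of nvmf_discovery_add_referral surfaces on the wire: a referral registered with a subsystem NQN (nqn.2016-06.io.spdk:cnode1) appears in the initiator's discovery log as an "nvme subsystem" record carrying that NQN, while one registered with -n discovery appears as a "discovery subsystem referral" pointing at nqn.2014-08.org.nvmexpress.discovery; once both are removed, nvmf_discovery_get_referrals drops back to an empty list. The per-subtype check is a jq filter over the same nvme discover JSON, roughly:

    # get_discovery_entries '<subtype>' | jq -r .subnqn, as used by referrals.sh
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq '.records[] | select(.subtype == "nvme subsystem")' | jq -r .subnqn
        # expected: nqn.2016-06.io.spdk:cnode1 while that referral exists, empty after it is removed
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq '.records[] | select(.subtype == "discovery subsystem referral")' | jq -r .subnqn
        # expected: nqn.2014-08.org.nvmexpress.discovery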
00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.403 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.663 rmmod nvme_tcp 00:14:43.663 rmmod nvme_fabrics 00:14:43.663 rmmod nvme_keyring 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 354217 ']' 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 354217 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 354217 ']' 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 354217 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 354217 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 354217' 00:14:43.663 killing process with pid 354217 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 354217 00:14:43.663 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 354217 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.924 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.834 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.834 00:14:45.834 real 0m13.665s 00:14:45.834 user 0m13.668s 00:14:45.834 sys 0m7.066s 00:14:45.834 12:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:45.834 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:45.834 ************************************ 00:14:45.834 END TEST nvmf_referrals 00:14:45.834 ************************************ 00:14:45.834 12:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:45.834 12:27:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.095 ************************************ 00:14:46.095 START TEST nvmf_connect_disconnect 00:14:46.095 ************************************ 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:46.095 * Looking for test storage... 00:14:46.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.095 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.096 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:54.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:54.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.237 12:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:54.237 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:54.237 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.237 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
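The device-discovery trace above (gather_supported_nvmf_pci_devs) resolves each supported NIC to its kernel interface simply by listing the net/ directory under the PCI function in sysfs. A minimal standalone sketch, assuming the two Intel E810 functions this run detected:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      # every entry under .../net is a netdev bound to that PCI function
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: ${dev##*/}"
      done
  done

On this machine both functions map to the cvl_0_0 and cvl_0_1 interfaces reported above.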
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.238 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:14:54.498 00:14:54.498 --- 10.0.0.2 ping statistics --- 00:14:54.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.498 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:14:54.498 00:14:54.498 --- 10.0.0.1 ping statistics --- 00:14:54.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.498 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=359188 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 359188 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 359188 ']' 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.498 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.758 [2024-07-25 12:27:27.966969] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
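The nvmf_tcp_init sequence traced above builds the TCP test bed by moving the target-side interface (cvl_0_0) into a private network namespace while the initiator-side interface (cvl_0_1) stays in the root namespace. Condensed from the trace, the setup amounts to roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF); its DPDK/EAL startup notices follow.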
00:14:54.758 [2024-07-25 12:27:27.967030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.758 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.758 [2024-07-25 12:27:28.060065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.758 [2024-07-25 12:27:28.152644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.758 [2024-07-25 12:27:28.152700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.759 [2024-07-25 12:27:28.152709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.759 [2024-07-25 12:27:28.152715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.759 [2024-07-25 12:27:28.152721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.759 [2024-07-25 12:27:28.152885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.759 [2024-07-25 12:27:28.153019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.759 [2024-07-25 12:27:28.153173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.759 [2024-07-25 12:27:28.153173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 [2024-07-25 12:27:28.907418] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.699 12:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:55.699 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.700 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.700 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.700 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:55.700 [2024-07-25 12:27:28.980944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.700 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.700 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:55.700 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:55.700 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:59.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.095 12:27:47 
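The connect_disconnect test body traced above provisions the target over JSON-RPC and then attaches and detaches a host num_iterations=5 times. A sketch against scripts/rpc.py (rpc_cmd wraps it; the nvme-cli lines are an approximation of what connect_disconnect.sh drives, using the --hostnqn/--hostid values generated in common.sh):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512          # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # repeated five times in this run:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Each successful iteration produces one of the five 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' lines logged above.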
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.095 rmmod nvme_tcp 00:15:14.095 rmmod nvme_fabrics 00:15:14.095 rmmod nvme_keyring 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 359188 ']' 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 359188 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 359188 ']' 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 359188 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 359188 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:14.095 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:14.096 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 359188' 00:15:14.096 killing process with pid 359188 00:15:14.096 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 359188 00:15:14.096 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 359188 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.356 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.271 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.271 00:15:16.271 real 0m30.366s 00:15:16.271 user 1m19.574s 00:15:16.271 sys 0m7.608s 00:15:16.271 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.271 12:27:49 
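nvmftestfini then tears the bed down in reverse order; a rough sketch of the equivalent manual cleanup (the pid and namespace name are the ones used in this run):

  modprobe -v -r nvme-tcp          # cascades into the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill 359188                      # the nvmf_tgt started by nvmfappstart
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1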
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:16.271 ************************************ 00:15:16.271 END TEST nvmf_connect_disconnect 00:15:16.271 ************************************ 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.533 ************************************ 00:15:16.533 START TEST nvmf_multitarget 00:15:16.533 ************************************ 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:16.533 * Looking for test storage... 00:15:16.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 
00:15:16.533 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.534 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:24.673 
12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.673 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:24.674 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:24.674 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:24.674 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.674 12:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:24.674 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.674 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.674 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.674 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:15:24.935 00:15:24.935 --- 10.0.0.2 ping statistics --- 00:15:24.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.935 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:15:24.935 00:15:24.935 --- 10.0.0.1 ping statistics --- 00:15:24.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.935 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=366839 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 366839 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 366839 ']' 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.935 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:24.935 [2024-07-25 12:27:58.328919] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:15:24.935 [2024-07-25 12:27:58.328982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.196 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.196 [2024-07-25 12:27:58.425730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.196 [2024-07-25 12:27:58.522397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.196 [2024-07-25 12:27:58.522462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.196 [2024-07-25 12:27:58.522469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.196 [2024-07-25 12:27:58.522476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.196 [2024-07-25 12:27:58.522482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.196 [2024-07-25 12:27:58.522576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.196 [2024-07-25 12:27:58.522705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.196 [2024-07-25 12:27:58.522845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.196 [2024-07-25 12:27:58.522846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:26.141 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:26.141 "nvmf_tgt_1" 00:15:26.141 12:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:26.403 "nvmf_tgt_2" 00:15:26.403 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:26.403 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:26.403 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:26.403 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:26.663 true 00:15:26.663 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:26.663 true 00:15:26.663 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:26.663 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.930 rmmod nvme_tcp 00:15:26.930 rmmod nvme_fabrics 00:15:26.930 rmmod nvme_keyring 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 366839 ']' 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 366839 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 366839 ']' 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 366839 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
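The multitarget calls traced above can be replayed by hand against the same running target; a rough sketch, using only the helper script and arguments shown in the trace:

    RPC=test/nvmf/target/multitarget_rpc.py      # path as used by multitarget.sh

    $RPC nvmf_get_targets | jq length            # 1: only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length            # now 3
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length            # back to 1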
00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 366839 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 366839' 00:15:26.930 killing process with pid 366839 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 366839 00:15:26.930 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 366839 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.191 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.100 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.100 00:15:29.100 real 0m12.762s 00:15:29.100 user 0m10.902s 00:15:29.100 sys 0m6.776s 00:15:29.100 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.100 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:29.100 ************************************ 00:15:29.100 END TEST nvmf_multitarget 00:15:29.100 ************************************ 00:15:29.361 12:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:29.361 12:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:29.361 12:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:29.361 12:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.361 12:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.361 ************************************ 00:15:29.361 START TEST nvmf_rpc 00:15:29.361 ************************************ 00:15:29.361 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:29.362 * Looking for test storage... 
00:15:29.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.362 12:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.362 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.502 12:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:37.502 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:37.502 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.502 
12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:37.502 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:37.502 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.502 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.503 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.503 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.503 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.503 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.503 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.503 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.503 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.503 12:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.763 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.764 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.764 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.764 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:15:37.764 00:15:37.764 --- 10.0.0.2 ping statistics --- 00:15:37.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.764 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:15:37.764 00:15:37.764 --- 10.0.0.1 ping statistics --- 00:15:37.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.764 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=371580 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 371580 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 371580 ']' 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.764 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.025 [2024-07-25 12:28:11.225542] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:15:38.025 [2024-07-25 12:28:11.225609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.025 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.025 [2024-07-25 12:28:11.305671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.025 [2024-07-25 12:28:11.402154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.025 [2024-07-25 12:28:11.402214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.025 [2024-07-25 12:28:11.402222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.025 [2024-07-25 12:28:11.402228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.025 [2024-07-25 12:28:11.402233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
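Because the target was started with -e 0xFFFF, the tracepoint notices printed above apply; a snapshot can be taken while the target is running, roughly as follows (the spdk_trace binary living under build/bin is an assumption):

    # live snapshot from the running target (app name nvmf, shm id 0, as the notice suggests)
    ./build/bin/spdk_trace -s nvmf -i 0

    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0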
00:15:38.025 [2024-07-25 12:28:11.402387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.025 [2024-07-25 12:28:11.402541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.025 [2024-07-25 12:28:11.402725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.025 [2024-07-25 12:28:11.402843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:38.967 "tick_rate": 2600000000, 00:15:38.967 "poll_groups": [ 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_000", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [] 00:15:38.967 }, 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_001", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [] 00:15:38.967 }, 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_002", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [] 00:15:38.967 }, 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_003", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [] 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 }' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
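The poll-group check traced above is plain jq arithmetic over the nvmf_get_stats output; a sketch using scripts/rpc.py directly (the test's rpc_cmd wrapper issues the same RPC, and the default /var/tmp/spdk.sock socket is assumed):

    # the trace shows one poll group per reactor, so with -m 0xF the count is 4
    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l     # 4

    # before nvmf_create_transport runs, no transport is attached to any group
    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'   # null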
00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.967 [2024-07-25 12:28:12.275050] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:38.967 "tick_rate": 2600000000, 00:15:38.967 "poll_groups": [ 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_000", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [ 00:15:38.967 { 00:15:38.967 "trtype": "TCP" 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 }, 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_001", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [ 00:15:38.967 { 00:15:38.967 "trtype": "TCP" 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 }, 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_002", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [ 00:15:38.967 { 00:15:38.967 "trtype": "TCP" 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 }, 00:15:38.967 { 00:15:38.967 "name": "nvmf_tgt_poll_group_003", 00:15:38.967 "admin_qpairs": 0, 00:15:38.967 "io_qpairs": 0, 00:15:38.967 "current_admin_qpairs": 0, 00:15:38.967 "current_io_qpairs": 0, 00:15:38.967 "pending_bdev_io": 0, 00:15:38.967 "completed_nvme_io": 0, 00:15:38.967 "transports": [ 00:15:38.967 { 00:15:38.967 "trtype": "TCP" 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 }' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:38.967 12:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:38.967 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.228 Malloc1 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.228 [2024-07-25 12:28:12.473337] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:39.228 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:15:39.229 [2024-07-25 12:28:12.510259] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:15:39.229 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:39.229 could not add new controller: failed to write to nvme-fabrics device 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.229 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 
--hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.612 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.612 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:40.612 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.612 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:40.612 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.153 [2024-07-25 12:28:16.157879] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:15:43.153 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:43.153 could not add new controller: failed to write to nvme-fabrics device 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.153 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:44.536 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:44.536 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:44.536 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.536 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:44.536 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
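The host-ACL sequence traced over the last several entries maps onto plain rpc.py and nvme-cli calls; a condensed sketch with the NQNs and address taken from the trace ($HOSTNQN stands for the nqn.2014-08.org.nvmexpress:uuid:... value generated by nvme gen-hostnqn earlier in the run):

    RPC=./scripts/rpc.py
    SUBNQN=nqn.2016-06.io.spdk:cnode1

    $RPC bdev_malloc_create 64 512 -b Malloc1               # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem $SUBNQN -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns $SUBNQN Malloc1
    $RPC nvmf_subsystem_allow_any_host -d $SUBNQN            # close the subsystem to unknown hosts
    $RPC nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420

    nvme connect -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # rejected: host not in ACL
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN
    nvme connect -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # now succeeds
    nvme disconnect -n $SUBNQN

    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN          # a further connect fails again
    $RPC nvmf_subsystem_allow_any_host -e $SUBNQN             # re-open: any host may connect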
00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:46.445 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.705 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.705 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.705 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.705 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.706 [2024-07-25 12:28:19.899367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.706 
12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.706 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:48.088 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:48.088 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:48.088 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.088 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:48.088 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:49.998 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:49.998 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:49.998 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:49.998 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:49.998 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.998 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:49.998 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 [2024-07-25 12:28:23.538491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.259 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:52.185 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.185 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:52.185 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.185 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:52.185 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.096 [2024-07-25 12:28:27.302658] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.096 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.480 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:55.480 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:55.480 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.480 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:55.480 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:57.389 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:57.389 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:57.389 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.389 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:57.389 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.389 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:57.389 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.649 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 [2024-07-25 12:28:30.935333] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.558 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.558 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.558 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.558 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.558 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 [2024-07-25 12:28:34.695330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.465 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.848 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:02.848 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:02.848 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.848 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:02.848 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:05.390 12:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.390 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 [2024-07-25 12:28:38.406996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 [2024-07-25 12:28:38.467141] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 [2024-07-25 12:28:38.531327] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 [2024-07-25 12:28:38.587501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.391 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 [2024-07-25 12:28:38.647706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:05.392 "tick_rate": 2600000000, 00:16:05.392 "poll_groups": [ 00:16:05.392 { 00:16:05.392 "name": "nvmf_tgt_poll_group_000", 00:16:05.392 "admin_qpairs": 0, 00:16:05.392 "io_qpairs": 224, 00:16:05.392 "current_admin_qpairs": 0, 00:16:05.392 "current_io_qpairs": 0, 00:16:05.392 "pending_bdev_io": 0, 00:16:05.392 "completed_nvme_io": 224, 00:16:05.392 "transports": [ 00:16:05.392 { 00:16:05.392 "trtype": "TCP" 00:16:05.392 } 00:16:05.392 ] 00:16:05.392 }, 00:16:05.392 { 00:16:05.392 "name": "nvmf_tgt_poll_group_001", 00:16:05.392 "admin_qpairs": 1, 00:16:05.392 "io_qpairs": 223, 00:16:05.392 "current_admin_qpairs": 0, 00:16:05.392 "current_io_qpairs": 0, 00:16:05.392 "pending_bdev_io": 0, 00:16:05.392 "completed_nvme_io": 276, 00:16:05.392 "transports": [ 00:16:05.392 { 00:16:05.392 "trtype": "TCP" 00:16:05.392 } 00:16:05.392 ] 00:16:05.392 }, 00:16:05.392 { 00:16:05.392 "name": "nvmf_tgt_poll_group_002", 00:16:05.392 "admin_qpairs": 6, 00:16:05.392 "io_qpairs": 218, 00:16:05.392 "current_admin_qpairs": 0, 00:16:05.392 "current_io_qpairs": 0, 00:16:05.392 "pending_bdev_io": 0, 00:16:05.392 "completed_nvme_io": 513, 00:16:05.392 "transports": [ 00:16:05.392 { 00:16:05.392 "trtype": "TCP" 00:16:05.392 } 00:16:05.392 ] 00:16:05.392 }, 00:16:05.392 { 00:16:05.392 "name": "nvmf_tgt_poll_group_003", 00:16:05.392 "admin_qpairs": 0, 00:16:05.392 "io_qpairs": 224, 00:16:05.392 "current_admin_qpairs": 0, 00:16:05.392 "current_io_qpairs": 0, 00:16:05.392 "pending_bdev_io": 0, 00:16:05.392 "completed_nvme_io": 226, 00:16:05.392 "transports": [ 00:16:05.392 { 00:16:05.392 "trtype": "TCP" 00:16:05.392 } 00:16:05.392 ] 00:16:05.392 } 00:16:05.392 ] 00:16:05.392 }' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.392 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.654 rmmod nvme_tcp 00:16:05.654 rmmod nvme_fabrics 00:16:05.654 rmmod nvme_keyring 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 371580 ']' 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 371580 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 371580 ']' 00:16:05.654 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 371580 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 371580 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 371580' 00:16:05.655 killing process with pid 371580 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 371580 00:16:05.655 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 371580 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.915 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.829 00:16:07.829 real 0m38.577s 00:16:07.829 user 1m52.721s 00:16:07.829 sys 0m8.084s 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.829 ************************************ 00:16:07.829 END TEST nvmf_rpc 00:16:07.829 ************************************ 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.829 12:28:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.090 ************************************ 00:16:08.090 START TEST nvmf_invalid 00:16:08.090 ************************************ 00:16:08.090 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:08.090 * Looking for test storage... 
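Back at the nvmf_rpc wrap-up a few lines up, the two qpair assertions ((( 7 > 0 )) and (( 889 > 0 ))) come from summing per-poll-group counters out of nvmf_get_stats with the jsum helper: jq pulls one field per poll group and awk adds them up. A minimal stand-alone version of that check, assuming rpc.py, jq and awk are available on the target host:

    stats=$(rpc.py nvmf_get_stats)
    admin_total=$(jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
    io_total=$(jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
    # the test only asserts that each class of qpair was created at least once during the run
    (( admin_total > 0 && io_total > 0 )) && echo "qpairs seen: $admin_total admin, $io_total io"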
00:16:08.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.090 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.090 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:08.090 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.090 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.090 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.090 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
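Every nvme connect in this log carries the same --hostnqn/--hostid pair, and the common.sh sourcing above is where it comes from: the host NQN is generated once with nvme gen-hostnqn and its UUID suffix is reused as the host ID, with both packed into the NVME_HOST array. A rough equivalent of that setup (the UUID naturally differs per machine, and subsystem_nqn / target_ip below are placeholders, not names from the scripts):

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep just the UUID part as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # every host-side connect then reuses the same identity:
    nvme connect "${NVME_HOST[@]}" -t tcp -n "$subsystem_nqn" -a "$target_ip" -s 4420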
00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.091 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.235 12:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:16.235 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:16.235 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
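(Editorial sketch, not part of the captured trace.) The PCI matching above feeds the sysfs lookup that follows: each matched function is resolved to its kernel net device by globbing /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of that lookup, using the 0000:4b:00.0 address seen in this log (any other PCI network function works the same way):

  #!/usr/bin/env bash
  # Resolve a PCI network function to its kernel net device name via sysfs,
  # mirroring the pci_net_devs=(...) pattern traced in nvmf/common.sh.
  pci=0000:4b:00.0                                 # PCI address from this log
  shopt -s nullglob                                # empty array if the function has no net/ entry
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"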
00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:16.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:16.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.235 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.236 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.236 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.236 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.236 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.236 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:16:16.236 00:16:16.236 --- 10.0.0.2 ping statistics --- 00:16:16.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.236 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:16:16.236 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:16:16.496 00:16:16.496 --- 10.0.0.1 ping statistics --- 00:16:16.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.496 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=380581 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 380581 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 380581 ']' 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.496 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:16.496 [2024-07-25 12:28:49.757313] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
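(Editorial sketch, not part of the captured trace.) The nvmf_tcp_init sequence above builds the two-port loopback topology the rest of the test relies on: one port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, the other (cvl_0_1) stays in the default namespace as 10.0.0.1 for the initiator, TCP port 4420 is opened for the NVMe-oF listener, and both directions are verified with ping. Condensed from the commands in the trace (interface names and addresses as logged; run as root):

  # Target-side interface lives in its own namespace; initiator side stays in the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                    # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace -> host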
00:16:16.496 [2024-07-25 12:28:49.757398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.496 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.496 [2024-07-25 12:28:49.851095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.760 [2024-07-25 12:28:49.945219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.760 [2024-07-25 12:28:49.945277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.760 [2024-07-25 12:28:49.945284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.760 [2024-07-25 12:28:49.945291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.760 [2024-07-25 12:28:49.945297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.760 [2024-07-25 12:28:49.945433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.760 [2024-07-25 12:28:49.945608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.760 [2024-07-25 12:28:49.945681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.760 [2024-07-25 12:28:49.945680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:17.331 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7588 00:16:17.593 [2024-07-25 12:28:50.867195] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:17.593 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:17.593 { 00:16:17.593 "nqn": "nqn.2016-06.io.spdk:cnode7588", 00:16:17.593 "tgt_name": "foobar", 00:16:17.593 "method": "nvmf_create_subsystem", 00:16:17.593 "req_id": 1 00:16:17.593 } 00:16:17.593 Got JSON-RPC error response 00:16:17.593 response: 00:16:17.593 { 00:16:17.593 "code": -32603, 00:16:17.593 "message": "Unable to find target foobar" 00:16:17.593 }' 00:16:17.593 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:17.593 { 00:16:17.593 "nqn": "nqn.2016-06.io.spdk:cnode7588", 00:16:17.593 "tgt_name": "foobar", 00:16:17.593 "method": "nvmf_create_subsystem", 00:16:17.593 "req_id": 1 00:16:17.593 
} 00:16:17.593 Got JSON-RPC error response 00:16:17.593 response: 00:16:17.593 { 00:16:17.593 "code": -32603, 00:16:17.593 "message": "Unable to find target foobar" 00:16:17.593 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:17.593 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:17.593 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15188 00:16:17.854 [2024-07-25 12:28:51.104088] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15188: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:17.854 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:17.854 { 00:16:17.854 "nqn": "nqn.2016-06.io.spdk:cnode15188", 00:16:17.854 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:17.854 "method": "nvmf_create_subsystem", 00:16:17.854 "req_id": 1 00:16:17.854 } 00:16:17.854 Got JSON-RPC error response 00:16:17.854 response: 00:16:17.854 { 00:16:17.854 "code": -32602, 00:16:17.854 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:17.854 }' 00:16:17.854 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:17.854 { 00:16:17.854 "nqn": "nqn.2016-06.io.spdk:cnode15188", 00:16:17.854 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:17.854 "method": "nvmf_create_subsystem", 00:16:17.854 "req_id": 1 00:16:17.854 } 00:16:17.854 Got JSON-RPC error response 00:16:17.854 response: 00:16:17.854 { 00:16:17.854 "code": -32602, 00:16:17.854 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:17.855 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:17.855 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:17.855 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27837 00:16:18.117 [2024-07-25 12:28:51.336915] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27837: invalid model number 'SPDK_Controller' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:18.117 { 00:16:18.117 "nqn": "nqn.2016-06.io.spdk:cnode27837", 00:16:18.117 "model_number": "SPDK_Controller\u001f", 00:16:18.117 "method": "nvmf_create_subsystem", 00:16:18.117 "req_id": 1 00:16:18.117 } 00:16:18.117 Got JSON-RPC error response 00:16:18.117 response: 00:16:18.117 { 00:16:18.117 "code": -32602, 00:16:18.117 "message": "Invalid MN SPDK_Controller\u001f" 00:16:18.117 }' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:18.117 { 00:16:18.117 "nqn": "nqn.2016-06.io.spdk:cnode27837", 00:16:18.117 "model_number": "SPDK_Controller\u001f", 00:16:18.117 "method": "nvmf_create_subsystem", 00:16:18.117 "req_id": 1 00:16:18.117 } 00:16:18.117 Got JSON-RPC error response 00:16:18.117 response: 00:16:18.117 { 00:16:18.117 "code": -32602, 00:16:18.117 "message": "Invalid MN SPDK_Controller\u001f" 00:16:18.117 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='[' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:18.117 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x64' 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.118 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:16:18.380 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'OLjDE'\''33~3~e<)dh,0ssyMoK4g' 00:16:18.906 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '#'\''|37e|&y]Q\9>OLjDE'\''33~3~e<)dh,0ssyMoK4g' nqn.2016-06.io.spdk:cnode7339 00:16:18.906 [2024-07-25 12:28:52.264451] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7339: invalid model number '#'|37e|&y]Q\9>OLjDE'33~3~e<)dh,0ssyMoK4g' 00:16:18.906 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:18.906 { 00:16:18.906 "nqn": "nqn.2016-06.io.spdk:cnode7339", 00:16:18.906 "model_number": "#'\''|37e|&y]Q\\9>O\u007fLjDE'\''33~3~e<)dh,0ssyMoK4g", 00:16:18.906 "method": "nvmf_create_subsystem", 00:16:18.906 "req_id": 1 00:16:18.906 } 00:16:18.906 Got JSON-RPC error response 00:16:18.906 response: 00:16:18.906 { 00:16:18.906 "code": -32602, 00:16:18.906 "message": "Invalid MN #'\''|37e|&y]Q\\9>O\u007fLjDE'\''33~3~e<)dh,0ssyMoK4g" 00:16:18.906 }' 00:16:18.906 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:18.906 { 00:16:18.906 "nqn": "nqn.2016-06.io.spdk:cnode7339", 00:16:18.906 "model_number": "#'|37e|&y]Q\\9>O\u007fLjDE'33~3~e<)dh,0ssyMoK4g", 00:16:18.906 "method": "nvmf_create_subsystem", 00:16:18.906 "req_id": 1 00:16:18.906 } 00:16:18.906 Got JSON-RPC error response 00:16:18.906 response: 00:16:18.906 { 00:16:18.906 "code": -32602, 00:16:18.906 "message": "Invalid MN #'|37e|&y]Q\\9>O\u007fLjDE'33~3~e<)dh,0ssyMoK4g" 00:16:18.906 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:18.906 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:19.167 [2024-07-25 12:28:52.493434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.167 12:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:19.428 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:19.428 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:19.428 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:19.428 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:19.428 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:19.689 [2024-07-25 12:28:52.948030] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:19.689 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:19.689 { 00:16:19.689 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:19.689 "listen_address": { 00:16:19.689 "trtype": "tcp", 00:16:19.689 "traddr": "", 00:16:19.689 "trsvcid": "4421" 00:16:19.689 }, 00:16:19.689 "method": "nvmf_subsystem_remove_listener", 00:16:19.689 "req_id": 1 00:16:19.689 } 00:16:19.689 Got JSON-RPC error response 00:16:19.689 response: 00:16:19.689 { 00:16:19.689 "code": -32602, 00:16:19.689 "message": "Invalid parameters" 00:16:19.689 }' 00:16:19.689 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:19.689 { 00:16:19.689 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:19.689 "listen_address": { 00:16:19.689 "trtype": "tcp", 00:16:19.689 "traddr": "", 00:16:19.689 "trsvcid": "4421" 00:16:19.689 }, 00:16:19.689 "method": "nvmf_subsystem_remove_listener", 00:16:19.689 "req_id": 1 00:16:19.689 } 00:16:19.689 Got JSON-RPC error response 00:16:19.689 response: 00:16:19.689 { 00:16:19.689 "code": -32602, 00:16:19.689 "message": "Invalid parameters" 00:16:19.689 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:19.689 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31342 -i 0 00:16:19.950 [2024-07-25 12:28:53.180811] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31342: invalid cntlid range [0-65519] 00:16:19.950 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:19.950 { 00:16:19.950 "nqn": "nqn.2016-06.io.spdk:cnode31342", 00:16:19.950 "min_cntlid": 0, 00:16:19.950 "method": "nvmf_create_subsystem", 00:16:19.950 "req_id": 1 00:16:19.950 } 00:16:19.950 Got JSON-RPC error response 00:16:19.950 response: 00:16:19.950 { 00:16:19.950 "code": -32602, 00:16:19.950 "message": "Invalid cntlid range [0-65519]" 00:16:19.950 }' 00:16:19.950 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:19.950 { 00:16:19.950 "nqn": "nqn.2016-06.io.spdk:cnode31342", 00:16:19.950 "min_cntlid": 0, 00:16:19.950 "method": "nvmf_create_subsystem", 00:16:19.950 "req_id": 1 00:16:19.950 } 00:16:19.950 Got JSON-RPC error response 00:16:19.950 response: 00:16:19.950 { 00:16:19.950 "code": -32602, 00:16:19.950 "message": "Invalid cntlid range [0-65519]" 00:16:19.950 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:19.950 12:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16106 -i 65520 00:16:20.210 [2024-07-25 12:28:53.413625] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16106: invalid cntlid range [65520-65519] 00:16:20.210 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:20.210 { 00:16:20.210 "nqn": "nqn.2016-06.io.spdk:cnode16106", 00:16:20.210 "min_cntlid": 65520, 00:16:20.210 "method": "nvmf_create_subsystem", 00:16:20.210 "req_id": 1 00:16:20.210 } 00:16:20.210 Got JSON-RPC error response 00:16:20.210 response: 00:16:20.210 { 00:16:20.210 "code": -32602, 00:16:20.210 "message": "Invalid cntlid range [65520-65519]" 00:16:20.210 }' 00:16:20.210 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:20.210 { 00:16:20.210 "nqn": "nqn.2016-06.io.spdk:cnode16106", 00:16:20.210 "min_cntlid": 65520, 00:16:20.210 "method": "nvmf_create_subsystem", 00:16:20.210 "req_id": 1 00:16:20.210 } 00:16:20.210 Got JSON-RPC error response 00:16:20.210 response: 00:16:20.210 { 00:16:20.210 "code": -32602, 00:16:20.210 "message": "Invalid cntlid range [65520-65519]" 00:16:20.210 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:20.210 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11188 -I 0 00:16:20.470 [2024-07-25 12:28:53.646492] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11188: invalid cntlid range [1-0] 00:16:20.470 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:20.470 { 00:16:20.470 "nqn": "nqn.2016-06.io.spdk:cnode11188", 00:16:20.470 "max_cntlid": 0, 00:16:20.470 "method": "nvmf_create_subsystem", 00:16:20.470 "req_id": 1 00:16:20.470 } 00:16:20.470 Got JSON-RPC error response 00:16:20.470 response: 00:16:20.470 { 00:16:20.470 "code": -32602, 00:16:20.470 "message": "Invalid cntlid range [1-0]" 00:16:20.470 }' 00:16:20.470 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:20.470 { 00:16:20.470 "nqn": "nqn.2016-06.io.spdk:cnode11188", 00:16:20.470 "max_cntlid": 0, 00:16:20.470 "method": "nvmf_create_subsystem", 00:16:20.470 "req_id": 1 00:16:20.470 } 00:16:20.470 Got JSON-RPC error response 00:16:20.470 response: 00:16:20.470 { 00:16:20.470 "code": -32602, 00:16:20.470 "message": "Invalid cntlid range [1-0]" 00:16:20.470 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:20.470 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20532 -I 65520 00:16:20.470 [2024-07-25 12:28:53.879278] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20532: invalid cntlid range [1-65520] 00:16:20.731 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:20.731 { 00:16:20.731 "nqn": "nqn.2016-06.io.spdk:cnode20532", 00:16:20.731 "max_cntlid": 65520, 00:16:20.731 "method": "nvmf_create_subsystem", 00:16:20.731 "req_id": 1 00:16:20.731 } 00:16:20.731 Got JSON-RPC error response 00:16:20.731 response: 00:16:20.731 { 00:16:20.731 "code": -32602, 00:16:20.731 "message": 
"Invalid cntlid range [1-65520]" 00:16:20.731 }' 00:16:20.731 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:20.731 { 00:16:20.731 "nqn": "nqn.2016-06.io.spdk:cnode20532", 00:16:20.731 "max_cntlid": 65520, 00:16:20.731 "method": "nvmf_create_subsystem", 00:16:20.731 "req_id": 1 00:16:20.731 } 00:16:20.731 Got JSON-RPC error response 00:16:20.731 response: 00:16:20.731 { 00:16:20.731 "code": -32602, 00:16:20.731 "message": "Invalid cntlid range [1-65520]" 00:16:20.731 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:20.731 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11955 -i 6 -I 5 00:16:20.731 [2024-07-25 12:28:54.096005] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11955: invalid cntlid range [6-5] 00:16:20.731 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:20.731 { 00:16:20.731 "nqn": "nqn.2016-06.io.spdk:cnode11955", 00:16:20.731 "min_cntlid": 6, 00:16:20.731 "max_cntlid": 5, 00:16:20.731 "method": "nvmf_create_subsystem", 00:16:20.731 "req_id": 1 00:16:20.731 } 00:16:20.731 Got JSON-RPC error response 00:16:20.731 response: 00:16:20.731 { 00:16:20.731 "code": -32602, 00:16:20.731 "message": "Invalid cntlid range [6-5]" 00:16:20.731 }' 00:16:20.731 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:20.731 { 00:16:20.731 "nqn": "nqn.2016-06.io.spdk:cnode11955", 00:16:20.731 "min_cntlid": 6, 00:16:20.731 "max_cntlid": 5, 00:16:20.731 "method": "nvmf_create_subsystem", 00:16:20.731 "req_id": 1 00:16:20.731 } 00:16:20.731 Got JSON-RPC error response 00:16:20.731 response: 00:16:20.731 { 00:16:20.731 "code": -32602, 00:16:20.731 "message": "Invalid cntlid range [6-5]" 00:16:20.731 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:20.731 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:20.991 { 00:16:20.991 "name": "foobar", 00:16:20.991 "method": "nvmf_delete_target", 00:16:20.991 "req_id": 1 00:16:20.991 } 00:16:20.991 Got JSON-RPC error response 00:16:20.991 response: 00:16:20.991 { 00:16:20.991 "code": -32602, 00:16:20.991 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:20.991 }' 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:20.991 { 00:16:20.991 "name": "foobar", 00:16:20.991 "method": "nvmf_delete_target", 00:16:20.991 "req_id": 1 00:16:20.991 } 00:16:20.991 Got JSON-RPC error response 00:16:20.991 response: 00:16:20.991 { 00:16:20.991 "code": -32602, 00:16:20.991 "message": "The specified target doesn't exist, cannot delete it." 
00:16:20.991 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.991 rmmod nvme_tcp 00:16:20.991 rmmod nvme_fabrics 00:16:20.991 rmmod nvme_keyring 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 380581 ']' 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 380581 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 380581 ']' 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 380581 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380581 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380581' 00:16:20.991 killing process with pid 380581 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 380581 00:16:20.991 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 380581 00:16:21.251 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.251 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.251 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.251 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.251 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.251 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.251 12:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.251 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.794 00:16:23.794 real 0m15.344s 00:16:23.794 user 0m23.396s 00:16:23.794 sys 0m7.300s 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 ************************************ 00:16:23.794 END TEST nvmf_invalid 00:16:23.794 ************************************ 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 ************************************ 00:16:23.794 START TEST nvmf_connect_stress 00:16:23.794 ************************************ 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:23.794 * Looking for test storage... 
00:16:23.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.794 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:31.996 12:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:31.996 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:31.996 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:31.996 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:31.996 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.996 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.997 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:31.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:16:31.997 00:16:31.997 --- 10.0.0.2 ping statistics --- 00:16:31.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.997 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:16:31.997 00:16:31.997 --- 10.0.0.1 ping statistics --- 00:16:31.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.997 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=385881 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 385881 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 385881 ']' 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.997 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.997 [2024-07-25 12:29:05.341435] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:16:31.997 [2024-07-25 12:29:05.341499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.997 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.257 [2024-07-25 12:29:05.433905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:32.257 [2024-07-25 12:29:05.541624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.257 [2024-07-25 12:29:05.541695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.257 [2024-07-25 12:29:05.541707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.257 [2024-07-25 12:29:05.541717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.257 [2024-07-25 12:29:05.541725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.257 [2024-07-25 12:29:05.541889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.257 [2024-07-25 12:29:05.542037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.257 [2024-07-25 12:29:05.542039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.827 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.827 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:16:32.827 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:32.827 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:32.827 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 [2024-07-25 12:29:06.283664] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 [2024-07-25 12:29:06.328466] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.089 NULL1 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=386187 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.089 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.090 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.090 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.661 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.661 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:33.661 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.661 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.661 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.921 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.921 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:33.921 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.921 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.921 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.181 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.181 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:34.181 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.181 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.181 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.441 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.441 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:34.441 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.441 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.441 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.701 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.701 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:34.701 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.701 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.701 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.270 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.270 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:35.270 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.270 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.270 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.529 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.529 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:35.529 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.529 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.529 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.788 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.788 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:35.788 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.788 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.788 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.048 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.048 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:36.048 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.048 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.048 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.309 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.309 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:36.309 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.309 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.309 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.877 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.877 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:36.877 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.877 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.877 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.137 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.137 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:37.137 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.137 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.137 12:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.397 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.397 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:37.397 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.397 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.397 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.657 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.657 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:37.657 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.657 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.657 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.917 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.917 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:37.917 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.917 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.917 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.488 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.488 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:38.488 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.488 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.488 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.747 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.747 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:38.747 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.747 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.747 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.007 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.007 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:39.007 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.007 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.007 12:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.267 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.267 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:39.267 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.267 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.267 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.838 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.838 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:39.838 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.838 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.838 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.097 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.097 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:40.097 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.097 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.097 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.357 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.357 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:40.357 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.357 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.357 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.616 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.616 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:40.616 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.616 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.616 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.876 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.876 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:40.876 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.876 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.876 12:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.447 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.447 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:41.447 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.447 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.447 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.707 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.707 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:41.707 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.707 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.707 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.968 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.968 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:41.968 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.968 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.968 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.228 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.228 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:42.228 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.228 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.228 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.488 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.488 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:42.488 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.488 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.488 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.059 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.059 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:43.059 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.059 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.059 12:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.059 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 386187 00:16:43.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (386187) - No such process 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 386187 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.320 rmmod nvme_tcp 00:16:43.320 rmmod nvme_fabrics 00:16:43.320 rmmod nvme_keyring 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 385881 ']' 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 385881 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 385881 ']' 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 385881 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 385881 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 385881' 00:16:43.320 killing process with pid 385881 00:16:43.320 12:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 385881 00:16:43.320 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 385881 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.581 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.124 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.124 00:16:46.124 real 0m22.299s 00:16:46.124 user 0m42.285s 00:16:46.124 sys 0m10.318s 00:16:46.124 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.124 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.124 ************************************ 00:16:46.124 END TEST nvmf_connect_stress 00:16:46.124 ************************************ 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.124 ************************************ 00:16:46.124 START TEST nvmf_fused_ordering 00:16:46.124 ************************************ 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:46.124 * Looking for test storage... 
00:16:46.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.124 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.264 12:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:54.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.264 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:54.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
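The xtrace above is nvmf/common.sh's gather_supported_nvmf_pci_devs walking its table of supported NICs: it builds the e810/x722/mlx arrays keyed by PCI vendor:device ID (0x8086 for Intel, 0x15b3 for Mellanox), then loops over the matching PCI functions and reports them ("Found 0000:4b:00.0 (0x8086 - 0x159b)"). The script below is only a minimal, standalone sketch of that discovery pattern with an abbreviated ID list; it is not the actual nvmf/common.sh implementation.

#!/usr/bin/env bash
# Sketch: discover NICs by PCI vendor:device ID and list their net interfaces.
# Abbreviated ID list (two Intel IDs and one Mellanox ID taken from the trace above);
# nvmf/common.sh carries a much longer table.
intel=0x8086 mellanox=0x15b3
supported=("$intel:0x159b" "$intel:0x37d2" "$mellanox:0x1017")

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    for id in "${supported[@]}"; do
        [[ "$vendor:$device" == "$id" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do                  # netdevs, if a driver is bound
            [[ -e "$net" ]] && echo "  net device: ${net##*/}"
        done
    done
done
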
00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:54.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:54.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:16:54.265 00:16:54.265 --- 10.0.0.2 ping statistics --- 00:16:54.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.265 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:16:54.265 00:16:54.265 --- 10.0.0.1 ping statistics --- 00:16:54.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.265 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=392226 00:16:54.265 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 392226 00:16:54.527 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:54.527 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 392226 ']' 00:16:54.527 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.527 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.527 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.527 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.527 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:54.527 [2024-07-25 12:29:27.734402] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
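At this point nvmf_tcp_init has carved the two E810 ports into a point-to-point test topology: cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1/24), cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24), TCP port 4420 is opened in iptables, and reachability is verified with ping in both directions before nvmf_tgt is launched inside the namespace. Condensed into a standalone script, using the same interface and namespace names as the log and shown only as an illustration of the setup:

#!/usr/bin/env bash
# Namespace-based NVMe/TCP test topology, mirroring the trace above. Run as root.
set -euo pipefail
NS=cvl_0_0_ns_spdk                 # target-side network namespace
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target port into the netns
ip addr add "$INI_IP"/24 dev "$INI_IF"                 # initiator side, root netns
ip netns exec "$NS" ip addr add "$TGT_IP"/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 "$TGT_IP"                                    # root ns -> target
ip netns exec "$NS" ping -c 1 "$INI_IP"                # target ns -> initiator
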
00:16:54.527 [2024-07-25 12:29:27.734463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.527 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.527 [2024-07-25 12:29:27.829317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.527 [2024-07-25 12:29:27.936314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.527 [2024-07-25 12:29:27.936381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.527 [2024-07-25 12:29:27.936393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.527 [2024-07-25 12:29:27.936402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.527 [2024-07-25 12:29:27.936410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.527 [2024-07-25 12:29:27.936442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.468 [2024-07-25 12:29:28.672478] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.468 [2024-07-25 12:29:28.696758] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.468 NULL1 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.468 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:55.468 [2024-07-25 12:29:28.767480] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
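Before the initiator-side app connects, fused_ordering.sh provisions the target through the rpc_cmd calls traced above: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add a TCP listener on 10.0.0.2:4420, create the NULL1 null bdev, wait for bdev examine, and attach NULL1 as a namespace. Issued directly against a running nvmf_tgt with SPDK's scripts/rpc.py, the same sequence looks roughly like this (SPDK_DIR is an assumed path variable; subcommands and flags are copied verbatim from the trace):

#!/usr/bin/env bash
# Target-side RPC provisioning for the fused_ordering test, then the initiator run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path
RPC="$SPDK_DIR/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512          # backing bdev for the namespace
"$RPC" bdev_wait_for_examine
"$RPC" nvmf_subsystem_add_ns "$NQN" NULL1

# Initiator side: connect over TCP and exercise fused command ordering.
"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
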
00:16:55.468 [2024-07-25 12:29:28.767524] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392458 ] 00:16:55.468 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.038 Attached to nqn.2016-06.io.spdk:cnode1 00:16:56.038 Namespace ID: 1 size: 1GB 00:16:56.038 fused_ordering(0) 00:16:56.038 fused_ordering(1) 00:16:56.038 fused_ordering(2) 00:16:56.038 fused_ordering(3) 00:16:56.038 fused_ordering(4) 00:16:56.038 fused_ordering(5) 00:16:56.038 fused_ordering(6) 00:16:56.038 fused_ordering(7) 00:16:56.038 fused_ordering(8) 00:16:56.038 fused_ordering(9) 00:16:56.038 fused_ordering(10) 00:16:56.038 fused_ordering(11) 00:16:56.038 fused_ordering(12) 00:16:56.038 fused_ordering(13) 00:16:56.038 fused_ordering(14) 00:16:56.038 fused_ordering(15) 00:16:56.038 fused_ordering(16) 00:16:56.038 fused_ordering(17) 00:16:56.038 fused_ordering(18) 00:16:56.038 fused_ordering(19) 00:16:56.038 fused_ordering(20) 00:16:56.038 fused_ordering(21) 00:16:56.038 fused_ordering(22) 00:16:56.038 fused_ordering(23) 00:16:56.038 fused_ordering(24) 00:16:56.038 fused_ordering(25) 00:16:56.038 fused_ordering(26) 00:16:56.038 fused_ordering(27) 00:16:56.038 fused_ordering(28) 00:16:56.038 fused_ordering(29) 00:16:56.038 fused_ordering(30) 00:16:56.038 fused_ordering(31) 00:16:56.038 fused_ordering(32) 00:16:56.038 fused_ordering(33) 00:16:56.038 fused_ordering(34) 00:16:56.038 fused_ordering(35) 00:16:56.038 fused_ordering(36) 00:16:56.038 fused_ordering(37) 00:16:56.038 fused_ordering(38) 00:16:56.038 fused_ordering(39) 00:16:56.038 fused_ordering(40) 00:16:56.038 fused_ordering(41) 00:16:56.038 fused_ordering(42) 00:16:56.038 fused_ordering(43) 00:16:56.038 fused_ordering(44) 00:16:56.038 fused_ordering(45) 00:16:56.038 fused_ordering(46) 00:16:56.038 fused_ordering(47) 00:16:56.038 fused_ordering(48) 00:16:56.038 fused_ordering(49) 00:16:56.038 fused_ordering(50) 00:16:56.038 fused_ordering(51) 00:16:56.038 fused_ordering(52) 00:16:56.038 fused_ordering(53) 00:16:56.038 fused_ordering(54) 00:16:56.038 fused_ordering(55) 00:16:56.038 fused_ordering(56) 00:16:56.038 fused_ordering(57) 00:16:56.038 fused_ordering(58) 00:16:56.038 fused_ordering(59) 00:16:56.038 fused_ordering(60) 00:16:56.038 fused_ordering(61) 00:16:56.038 fused_ordering(62) 00:16:56.038 fused_ordering(63) 00:16:56.038 fused_ordering(64) 00:16:56.038 fused_ordering(65) 00:16:56.038 fused_ordering(66) 00:16:56.038 fused_ordering(67) 00:16:56.038 fused_ordering(68) 00:16:56.038 fused_ordering(69) 00:16:56.038 fused_ordering(70) 00:16:56.038 fused_ordering(71) 00:16:56.038 fused_ordering(72) 00:16:56.038 fused_ordering(73) 00:16:56.038 fused_ordering(74) 00:16:56.038 fused_ordering(75) 00:16:56.038 fused_ordering(76) 00:16:56.038 fused_ordering(77) 00:16:56.038 fused_ordering(78) 00:16:56.038 fused_ordering(79) 00:16:56.038 fused_ordering(80) 00:16:56.038 fused_ordering(81) 00:16:56.038 fused_ordering(82) 00:16:56.038 fused_ordering(83) 00:16:56.038 fused_ordering(84) 00:16:56.038 fused_ordering(85) 00:16:56.038 fused_ordering(86) 00:16:56.038 fused_ordering(87) 00:16:56.038 fused_ordering(88) 00:16:56.038 fused_ordering(89) 00:16:56.038 fused_ordering(90) 00:16:56.038 fused_ordering(91) 00:16:56.039 fused_ordering(92) 00:16:56.039 fused_ordering(93) 00:16:56.039 fused_ordering(94) 00:16:56.039 fused_ordering(95) 00:16:56.039 fused_ordering(96) 
00:16:56.039 fused_ordering(97) 00:16:56.039 fused_ordering(98) 00:16:56.039 fused_ordering(99) 00:16:56.039 fused_ordering(100) 00:16:56.039 fused_ordering(101) 00:16:56.039 fused_ordering(102) 00:16:56.039 fused_ordering(103) 00:16:56.039 fused_ordering(104) 00:16:56.039 fused_ordering(105) 00:16:56.039 fused_ordering(106) 00:16:56.039 fused_ordering(107) 00:16:56.039 fused_ordering(108) 00:16:56.039 fused_ordering(109) 00:16:56.039 fused_ordering(110) 00:16:56.039 fused_ordering(111) 00:16:56.039 fused_ordering(112) 00:16:56.039 fused_ordering(113) 00:16:56.039 fused_ordering(114) 00:16:56.039 fused_ordering(115) 00:16:56.039 fused_ordering(116) 00:16:56.039 fused_ordering(117) 00:16:56.039 fused_ordering(118) 00:16:56.039 fused_ordering(119) 00:16:56.039 fused_ordering(120) 00:16:56.039 fused_ordering(121) 00:16:56.039 fused_ordering(122) 00:16:56.039 fused_ordering(123) 00:16:56.039 fused_ordering(124) 00:16:56.039 fused_ordering(125) 00:16:56.039 fused_ordering(126) 00:16:56.039 fused_ordering(127) 00:16:56.039 fused_ordering(128) 00:16:56.039 fused_ordering(129) 00:16:56.039 fused_ordering(130) 00:16:56.039 fused_ordering(131) 00:16:56.039 fused_ordering(132) 00:16:56.039 fused_ordering(133) 00:16:56.039 fused_ordering(134) 00:16:56.039 fused_ordering(135) 00:16:56.039 fused_ordering(136) 00:16:56.039 fused_ordering(137) 00:16:56.039 fused_ordering(138) 00:16:56.039 fused_ordering(139) 00:16:56.039 fused_ordering(140) 00:16:56.039 fused_ordering(141) 00:16:56.039 fused_ordering(142) 00:16:56.039 fused_ordering(143) 00:16:56.039 fused_ordering(144) 00:16:56.039 fused_ordering(145) 00:16:56.039 fused_ordering(146) 00:16:56.039 fused_ordering(147) 00:16:56.039 fused_ordering(148) 00:16:56.039 fused_ordering(149) 00:16:56.039 fused_ordering(150) 00:16:56.039 fused_ordering(151) 00:16:56.039 fused_ordering(152) 00:16:56.039 fused_ordering(153) 00:16:56.039 fused_ordering(154) 00:16:56.039 fused_ordering(155) 00:16:56.039 fused_ordering(156) 00:16:56.039 fused_ordering(157) 00:16:56.039 fused_ordering(158) 00:16:56.039 fused_ordering(159) 00:16:56.039 fused_ordering(160) 00:16:56.039 fused_ordering(161) 00:16:56.039 fused_ordering(162) 00:16:56.039 fused_ordering(163) 00:16:56.039 fused_ordering(164) 00:16:56.039 fused_ordering(165) 00:16:56.039 fused_ordering(166) 00:16:56.039 fused_ordering(167) 00:16:56.039 fused_ordering(168) 00:16:56.039 fused_ordering(169) 00:16:56.039 fused_ordering(170) 00:16:56.039 fused_ordering(171) 00:16:56.039 fused_ordering(172) 00:16:56.039 fused_ordering(173) 00:16:56.039 fused_ordering(174) 00:16:56.039 fused_ordering(175) 00:16:56.039 fused_ordering(176) 00:16:56.039 fused_ordering(177) 00:16:56.039 fused_ordering(178) 00:16:56.039 fused_ordering(179) 00:16:56.039 fused_ordering(180) 00:16:56.039 fused_ordering(181) 00:16:56.039 fused_ordering(182) 00:16:56.039 fused_ordering(183) 00:16:56.039 fused_ordering(184) 00:16:56.039 fused_ordering(185) 00:16:56.039 fused_ordering(186) 00:16:56.039 fused_ordering(187) 00:16:56.039 fused_ordering(188) 00:16:56.039 fused_ordering(189) 00:16:56.039 fused_ordering(190) 00:16:56.039 fused_ordering(191) 00:16:56.039 fused_ordering(192) 00:16:56.039 fused_ordering(193) 00:16:56.039 fused_ordering(194) 00:16:56.039 fused_ordering(195) 00:16:56.039 fused_ordering(196) 00:16:56.039 fused_ordering(197) 00:16:56.039 fused_ordering(198) 00:16:56.039 fused_ordering(199) 00:16:56.039 fused_ordering(200) 00:16:56.039 fused_ordering(201) 00:16:56.039 fused_ordering(202) 00:16:56.039 fused_ordering(203) 00:16:56.039 
fused_ordering(204) 00:16:56.039 fused_ordering(205) 00:16:56.609 fused_ordering(206) 00:16:56.609 fused_ordering(207) 00:16:56.609 fused_ordering(208) 00:16:56.609 fused_ordering(209) 00:16:56.609 fused_ordering(210) 00:16:56.609 fused_ordering(211) 00:16:56.609 fused_ordering(212) 00:16:56.609 fused_ordering(213) 00:16:56.609 fused_ordering(214) 00:16:56.609 fused_ordering(215) 00:16:56.609 fused_ordering(216) 00:16:56.609 fused_ordering(217) 00:16:56.609 fused_ordering(218) 00:16:56.609 fused_ordering(219) 00:16:56.609 fused_ordering(220) 00:16:56.609 fused_ordering(221) 00:16:56.609 fused_ordering(222) 00:16:56.609 fused_ordering(223) 00:16:56.609 fused_ordering(224) 00:16:56.609 fused_ordering(225) 00:16:56.609 fused_ordering(226) 00:16:56.609 fused_ordering(227) 00:16:56.609 fused_ordering(228) 00:16:56.609 fused_ordering(229) 00:16:56.609 fused_ordering(230) 00:16:56.609 fused_ordering(231) 00:16:56.609 fused_ordering(232) 00:16:56.609 fused_ordering(233) 00:16:56.609 fused_ordering(234) 00:16:56.609 fused_ordering(235) 00:16:56.609 fused_ordering(236) 00:16:56.609 fused_ordering(237) 00:16:56.609 fused_ordering(238) 00:16:56.609 fused_ordering(239) 00:16:56.609 fused_ordering(240) 00:16:56.609 fused_ordering(241) 00:16:56.609 fused_ordering(242) 00:16:56.609 fused_ordering(243) 00:16:56.609 fused_ordering(244) 00:16:56.609 fused_ordering(245) 00:16:56.609 fused_ordering(246) 00:16:56.609 fused_ordering(247) 00:16:56.609 fused_ordering(248) 00:16:56.609 fused_ordering(249) 00:16:56.609 fused_ordering(250) 00:16:56.609 fused_ordering(251) 00:16:56.609 fused_ordering(252) 00:16:56.609 fused_ordering(253) 00:16:56.609 fused_ordering(254) 00:16:56.609 fused_ordering(255) 00:16:56.609 fused_ordering(256) 00:16:56.609 fused_ordering(257) 00:16:56.609 fused_ordering(258) 00:16:56.609 fused_ordering(259) 00:16:56.609 fused_ordering(260) 00:16:56.609 fused_ordering(261) 00:16:56.609 fused_ordering(262) 00:16:56.609 fused_ordering(263) 00:16:56.609 fused_ordering(264) 00:16:56.609 fused_ordering(265) 00:16:56.609 fused_ordering(266) 00:16:56.609 fused_ordering(267) 00:16:56.609 fused_ordering(268) 00:16:56.610 fused_ordering(269) 00:16:56.610 fused_ordering(270) 00:16:56.610 fused_ordering(271) 00:16:56.610 fused_ordering(272) 00:16:56.610 fused_ordering(273) 00:16:56.610 fused_ordering(274) 00:16:56.610 fused_ordering(275) 00:16:56.610 fused_ordering(276) 00:16:56.610 fused_ordering(277) 00:16:56.610 fused_ordering(278) 00:16:56.610 fused_ordering(279) 00:16:56.610 fused_ordering(280) 00:16:56.610 fused_ordering(281) 00:16:56.610 fused_ordering(282) 00:16:56.610 fused_ordering(283) 00:16:56.610 fused_ordering(284) 00:16:56.610 fused_ordering(285) 00:16:56.610 fused_ordering(286) 00:16:56.610 fused_ordering(287) 00:16:56.610 fused_ordering(288) 00:16:56.610 fused_ordering(289) 00:16:56.610 fused_ordering(290) 00:16:56.610 fused_ordering(291) 00:16:56.610 fused_ordering(292) 00:16:56.610 fused_ordering(293) 00:16:56.610 fused_ordering(294) 00:16:56.610 fused_ordering(295) 00:16:56.610 fused_ordering(296) 00:16:56.610 fused_ordering(297) 00:16:56.610 fused_ordering(298) 00:16:56.610 fused_ordering(299) 00:16:56.610 fused_ordering(300) 00:16:56.610 fused_ordering(301) 00:16:56.610 fused_ordering(302) 00:16:56.610 fused_ordering(303) 00:16:56.610 fused_ordering(304) 00:16:56.610 fused_ordering(305) 00:16:56.610 fused_ordering(306) 00:16:56.610 fused_ordering(307) 00:16:56.610 fused_ordering(308) 00:16:56.610 fused_ordering(309) 00:16:56.610 fused_ordering(310) 00:16:56.610 fused_ordering(311) 
00:16:56.610 fused_ordering(312) 00:16:56.610 fused_ordering(313) 00:16:56.610 fused_ordering(314) 00:16:56.610 fused_ordering(315) 00:16:56.610 fused_ordering(316) 00:16:56.610 fused_ordering(317) 00:16:56.610 fused_ordering(318) 00:16:56.610 fused_ordering(319) 00:16:56.610 fused_ordering(320) 00:16:56.610 fused_ordering(321) 00:16:56.610 fused_ordering(322) 00:16:56.610 fused_ordering(323) 00:16:56.610 fused_ordering(324) 00:16:56.610 fused_ordering(325) 00:16:56.610 fused_ordering(326) 00:16:56.610 fused_ordering(327) 00:16:56.610 fused_ordering(328) 00:16:56.610 fused_ordering(329) 00:16:56.610 fused_ordering(330) 00:16:56.610 fused_ordering(331) 00:16:56.610 fused_ordering(332) 00:16:56.610 fused_ordering(333) 00:16:56.610 fused_ordering(334) 00:16:56.610 fused_ordering(335) 00:16:56.610 fused_ordering(336) 00:16:56.610 fused_ordering(337) 00:16:56.610 fused_ordering(338) 00:16:56.610 fused_ordering(339) 00:16:56.610 fused_ordering(340) 00:16:56.610 fused_ordering(341) 00:16:56.610 fused_ordering(342) 00:16:56.610 fused_ordering(343) 00:16:56.610 fused_ordering(344) 00:16:56.610 fused_ordering(345) 00:16:56.610 fused_ordering(346) 00:16:56.610 fused_ordering(347) 00:16:56.610 fused_ordering(348) 00:16:56.610 fused_ordering(349) 00:16:56.610 fused_ordering(350) 00:16:56.610 fused_ordering(351) 00:16:56.610 fused_ordering(352) 00:16:56.610 fused_ordering(353) 00:16:56.610 fused_ordering(354) 00:16:56.610 fused_ordering(355) 00:16:56.610 fused_ordering(356) 00:16:56.610 fused_ordering(357) 00:16:56.610 fused_ordering(358) 00:16:56.610 fused_ordering(359) 00:16:56.610 fused_ordering(360) 00:16:56.610 fused_ordering(361) 00:16:56.610 fused_ordering(362) 00:16:56.610 fused_ordering(363) 00:16:56.610 fused_ordering(364) 00:16:56.610 fused_ordering(365) 00:16:56.610 fused_ordering(366) 00:16:56.610 fused_ordering(367) 00:16:56.610 fused_ordering(368) 00:16:56.610 fused_ordering(369) 00:16:56.610 fused_ordering(370) 00:16:56.610 fused_ordering(371) 00:16:56.610 fused_ordering(372) 00:16:56.610 fused_ordering(373) 00:16:56.610 fused_ordering(374) 00:16:56.610 fused_ordering(375) 00:16:56.610 fused_ordering(376) 00:16:56.610 fused_ordering(377) 00:16:56.610 fused_ordering(378) 00:16:56.610 fused_ordering(379) 00:16:56.610 fused_ordering(380) 00:16:56.610 fused_ordering(381) 00:16:56.610 fused_ordering(382) 00:16:56.610 fused_ordering(383) 00:16:56.610 fused_ordering(384) 00:16:56.610 fused_ordering(385) 00:16:56.610 fused_ordering(386) 00:16:56.610 fused_ordering(387) 00:16:56.610 fused_ordering(388) 00:16:56.610 fused_ordering(389) 00:16:56.610 fused_ordering(390) 00:16:56.610 fused_ordering(391) 00:16:56.610 fused_ordering(392) 00:16:56.610 fused_ordering(393) 00:16:56.610 fused_ordering(394) 00:16:56.610 fused_ordering(395) 00:16:56.610 fused_ordering(396) 00:16:56.610 fused_ordering(397) 00:16:56.610 fused_ordering(398) 00:16:56.610 fused_ordering(399) 00:16:56.610 fused_ordering(400) 00:16:56.610 fused_ordering(401) 00:16:56.610 fused_ordering(402) 00:16:56.610 fused_ordering(403) 00:16:56.610 fused_ordering(404) 00:16:56.610 fused_ordering(405) 00:16:56.610 fused_ordering(406) 00:16:56.610 fused_ordering(407) 00:16:56.610 fused_ordering(408) 00:16:56.610 fused_ordering(409) 00:16:56.610 fused_ordering(410) 00:16:56.872 fused_ordering(411) 00:16:56.872 fused_ordering(412) 00:16:56.872 fused_ordering(413) 00:16:56.872 fused_ordering(414) 00:16:56.872 fused_ordering(415) 00:16:56.872 fused_ordering(416) 00:16:56.872 fused_ordering(417) 00:16:56.872 fused_ordering(418) 00:16:56.872 
fused_ordering(419) 00:16:56.872 fused_ordering(420) 00:16:56.872 fused_ordering(421) 00:16:56.872 fused_ordering(422) 00:16:56.872 fused_ordering(423) 00:16:56.872 fused_ordering(424) 00:16:56.872 fused_ordering(425) 00:16:56.872 fused_ordering(426) 00:16:56.872 fused_ordering(427) 00:16:56.872 fused_ordering(428) 00:16:56.872 fused_ordering(429) 00:16:56.872 fused_ordering(430) 00:16:56.872 fused_ordering(431) 00:16:56.872 fused_ordering(432) 00:16:56.872 fused_ordering(433) 00:16:56.872 fused_ordering(434) 00:16:56.872 fused_ordering(435) 00:16:56.872 fused_ordering(436) 00:16:56.872 fused_ordering(437) 00:16:56.872 fused_ordering(438) 00:16:56.872 fused_ordering(439) 00:16:56.872 fused_ordering(440) 00:16:56.872 fused_ordering(441) 00:16:56.872 fused_ordering(442) 00:16:56.872 fused_ordering(443) 00:16:56.872 fused_ordering(444) 00:16:56.872 fused_ordering(445) 00:16:56.872 fused_ordering(446) 00:16:56.872 fused_ordering(447) 00:16:56.872 fused_ordering(448) 00:16:56.872 fused_ordering(449) 00:16:56.872 fused_ordering(450) 00:16:56.872 fused_ordering(451) 00:16:56.872 fused_ordering(452) 00:16:56.872 fused_ordering(453) 00:16:56.872 fused_ordering(454) 00:16:56.872 fused_ordering(455) 00:16:56.872 fused_ordering(456) 00:16:56.872 fused_ordering(457) 00:16:56.872 fused_ordering(458) 00:16:56.872 fused_ordering(459) 00:16:56.872 fused_ordering(460) 00:16:56.872 fused_ordering(461) 00:16:56.872 fused_ordering(462) 00:16:56.872 fused_ordering(463) 00:16:56.872 fused_ordering(464) 00:16:56.872 fused_ordering(465) 00:16:56.872 fused_ordering(466) 00:16:56.872 fused_ordering(467) 00:16:56.872 fused_ordering(468) 00:16:56.872 fused_ordering(469) 00:16:56.872 fused_ordering(470) 00:16:56.872 fused_ordering(471) 00:16:56.872 fused_ordering(472) 00:16:56.872 fused_ordering(473) 00:16:56.872 fused_ordering(474) 00:16:56.872 fused_ordering(475) 00:16:56.872 fused_ordering(476) 00:16:56.872 fused_ordering(477) 00:16:56.872 fused_ordering(478) 00:16:56.872 fused_ordering(479) 00:16:56.872 fused_ordering(480) 00:16:56.872 fused_ordering(481) 00:16:56.872 fused_ordering(482) 00:16:56.872 fused_ordering(483) 00:16:56.872 fused_ordering(484) 00:16:56.872 fused_ordering(485) 00:16:56.872 fused_ordering(486) 00:16:56.872 fused_ordering(487) 00:16:56.872 fused_ordering(488) 00:16:56.872 fused_ordering(489) 00:16:56.872 fused_ordering(490) 00:16:56.872 fused_ordering(491) 00:16:56.872 fused_ordering(492) 00:16:56.872 fused_ordering(493) 00:16:56.872 fused_ordering(494) 00:16:56.872 fused_ordering(495) 00:16:56.872 fused_ordering(496) 00:16:56.872 fused_ordering(497) 00:16:56.872 fused_ordering(498) 00:16:56.872 fused_ordering(499) 00:16:56.872 fused_ordering(500) 00:16:56.872 fused_ordering(501) 00:16:56.872 fused_ordering(502) 00:16:56.872 fused_ordering(503) 00:16:56.872 fused_ordering(504) 00:16:56.872 fused_ordering(505) 00:16:56.872 fused_ordering(506) 00:16:56.872 fused_ordering(507) 00:16:56.872 fused_ordering(508) 00:16:56.872 fused_ordering(509) 00:16:56.872 fused_ordering(510) 00:16:56.872 fused_ordering(511) 00:16:56.872 fused_ordering(512) 00:16:56.872 fused_ordering(513) 00:16:56.872 fused_ordering(514) 00:16:56.872 fused_ordering(515) 00:16:56.872 fused_ordering(516) 00:16:56.872 fused_ordering(517) 00:16:56.872 fused_ordering(518) 00:16:56.872 fused_ordering(519) 00:16:56.872 fused_ordering(520) 00:16:56.872 fused_ordering(521) 00:16:56.872 fused_ordering(522) 00:16:56.872 fused_ordering(523) 00:16:56.872 fused_ordering(524) 00:16:56.872 fused_ordering(525) 00:16:56.872 fused_ordering(526) 
00:16:56.872 fused_ordering(527) 00:16:56.872 fused_ordering(528) 00:16:56.872 fused_ordering(529) 00:16:56.872 fused_ordering(530) 00:16:56.872 fused_ordering(531) 00:16:56.872 fused_ordering(532) 00:16:56.872 fused_ordering(533) 00:16:56.872 fused_ordering(534) 00:16:56.872 fused_ordering(535) 00:16:56.872 fused_ordering(536) 00:16:56.872 fused_ordering(537) 00:16:56.872 fused_ordering(538) 00:16:56.872 fused_ordering(539) 00:16:56.872 fused_ordering(540) 00:16:56.872 fused_ordering(541) 00:16:56.872 fused_ordering(542) 00:16:56.872 fused_ordering(543) 00:16:56.872 fused_ordering(544) 00:16:56.872 fused_ordering(545) 00:16:56.872 fused_ordering(546) 00:16:56.872 fused_ordering(547) 00:16:56.872 fused_ordering(548) 00:16:56.872 fused_ordering(549) 00:16:56.872 fused_ordering(550) 00:16:56.872 fused_ordering(551) 00:16:56.872 fused_ordering(552) 00:16:56.872 fused_ordering(553) 00:16:56.872 fused_ordering(554) 00:16:56.872 fused_ordering(555) 00:16:56.872 fused_ordering(556) 00:16:56.872 fused_ordering(557) 00:16:56.872 fused_ordering(558) 00:16:56.872 fused_ordering(559) 00:16:56.872 fused_ordering(560) 00:16:56.872 fused_ordering(561) 00:16:56.872 fused_ordering(562) 00:16:56.872 fused_ordering(563) 00:16:56.872 fused_ordering(564) 00:16:56.872 fused_ordering(565) 00:16:56.872 fused_ordering(566) 00:16:56.872 fused_ordering(567) 00:16:56.872 fused_ordering(568) 00:16:56.872 fused_ordering(569) 00:16:56.872 fused_ordering(570) 00:16:56.872 fused_ordering(571) 00:16:56.872 fused_ordering(572) 00:16:56.872 fused_ordering(573) 00:16:56.872 fused_ordering(574) 00:16:56.872 fused_ordering(575) 00:16:56.872 fused_ordering(576) 00:16:56.872 fused_ordering(577) 00:16:56.872 fused_ordering(578) 00:16:56.872 fused_ordering(579) 00:16:56.872 fused_ordering(580) 00:16:56.872 fused_ordering(581) 00:16:56.872 fused_ordering(582) 00:16:56.872 fused_ordering(583) 00:16:56.872 fused_ordering(584) 00:16:56.872 fused_ordering(585) 00:16:56.872 fused_ordering(586) 00:16:56.872 fused_ordering(587) 00:16:56.872 fused_ordering(588) 00:16:56.873 fused_ordering(589) 00:16:56.873 fused_ordering(590) 00:16:56.873 fused_ordering(591) 00:16:56.873 fused_ordering(592) 00:16:56.873 fused_ordering(593) 00:16:56.873 fused_ordering(594) 00:16:56.873 fused_ordering(595) 00:16:56.873 fused_ordering(596) 00:16:56.873 fused_ordering(597) 00:16:56.873 fused_ordering(598) 00:16:56.873 fused_ordering(599) 00:16:56.873 fused_ordering(600) 00:16:56.873 fused_ordering(601) 00:16:56.873 fused_ordering(602) 00:16:56.873 fused_ordering(603) 00:16:56.873 fused_ordering(604) 00:16:56.873 fused_ordering(605) 00:16:56.873 fused_ordering(606) 00:16:56.873 fused_ordering(607) 00:16:56.873 fused_ordering(608) 00:16:56.873 fused_ordering(609) 00:16:56.873 fused_ordering(610) 00:16:56.873 fused_ordering(611) 00:16:56.873 fused_ordering(612) 00:16:56.873 fused_ordering(613) 00:16:56.873 fused_ordering(614) 00:16:56.873 fused_ordering(615) 00:16:57.445 fused_ordering(616) 00:16:57.445 fused_ordering(617) 00:16:57.445 fused_ordering(618) 00:16:57.445 fused_ordering(619) 00:16:57.445 fused_ordering(620) 00:16:57.445 fused_ordering(621) 00:16:57.445 fused_ordering(622) 00:16:57.445 fused_ordering(623) 00:16:57.445 fused_ordering(624) 00:16:57.445 fused_ordering(625) 00:16:57.445 fused_ordering(626) 00:16:57.445 fused_ordering(627) 00:16:57.445 fused_ordering(628) 00:16:57.445 fused_ordering(629) 00:16:57.445 fused_ordering(630) 00:16:57.445 fused_ordering(631) 00:16:57.445 fused_ordering(632) 00:16:57.445 fused_ordering(633) 00:16:57.445 
fused_ordering(634) 00:16:57.445 fused_ordering(635) 00:16:57.445 fused_ordering(636) 00:16:57.445 fused_ordering(637) 00:16:57.445 fused_ordering(638) 00:16:57.445 fused_ordering(639) 00:16:57.445 fused_ordering(640) 00:16:57.445 fused_ordering(641) 00:16:57.445 fused_ordering(642) 00:16:57.445 fused_ordering(643) 00:16:57.445 fused_ordering(644) 00:16:57.445 fused_ordering(645) 00:16:57.445 fused_ordering(646) 00:16:57.445 fused_ordering(647) 00:16:57.445 fused_ordering(648) 00:16:57.445 fused_ordering(649) 00:16:57.445 fused_ordering(650) 00:16:57.446 fused_ordering(651) 00:16:57.446 fused_ordering(652) 00:16:57.446 fused_ordering(653) 00:16:57.446 fused_ordering(654) 00:16:57.446 fused_ordering(655) 00:16:57.446 fused_ordering(656) 00:16:57.446 fused_ordering(657) 00:16:57.446 fused_ordering(658) 00:16:57.446 fused_ordering(659) 00:16:57.446 fused_ordering(660) 00:16:57.446 fused_ordering(661) 00:16:57.446 fused_ordering(662) 00:16:57.446 fused_ordering(663) 00:16:57.446 fused_ordering(664) 00:16:57.446 fused_ordering(665) 00:16:57.446 fused_ordering(666) 00:16:57.446 fused_ordering(667) 00:16:57.446 fused_ordering(668) 00:16:57.446 fused_ordering(669) 00:16:57.446 fused_ordering(670) 00:16:57.446 fused_ordering(671) 00:16:57.446 fused_ordering(672) 00:16:57.446 fused_ordering(673) 00:16:57.446 fused_ordering(674) 00:16:57.446 fused_ordering(675) 00:16:57.446 fused_ordering(676) 00:16:57.446 fused_ordering(677) 00:16:57.446 fused_ordering(678) 00:16:57.446 fused_ordering(679) 00:16:57.446 fused_ordering(680) 00:16:57.446 fused_ordering(681) 00:16:57.446 fused_ordering(682) 00:16:57.446 fused_ordering(683) 00:16:57.446 fused_ordering(684) 00:16:57.446 fused_ordering(685) 00:16:57.446 fused_ordering(686) 00:16:57.446 fused_ordering(687) 00:16:57.446 fused_ordering(688) 00:16:57.446 fused_ordering(689) 00:16:57.446 fused_ordering(690) 00:16:57.446 fused_ordering(691) 00:16:57.446 fused_ordering(692) 00:16:57.446 fused_ordering(693) 00:16:57.446 fused_ordering(694) 00:16:57.446 fused_ordering(695) 00:16:57.446 fused_ordering(696) 00:16:57.446 fused_ordering(697) 00:16:57.446 fused_ordering(698) 00:16:57.446 fused_ordering(699) 00:16:57.446 fused_ordering(700) 00:16:57.446 fused_ordering(701) 00:16:57.446 fused_ordering(702) 00:16:57.446 fused_ordering(703) 00:16:57.446 fused_ordering(704) 00:16:57.446 fused_ordering(705) 00:16:57.446 fused_ordering(706) 00:16:57.446 fused_ordering(707) 00:16:57.446 fused_ordering(708) 00:16:57.446 fused_ordering(709) 00:16:57.446 fused_ordering(710) 00:16:57.446 fused_ordering(711) 00:16:57.446 fused_ordering(712) 00:16:57.446 fused_ordering(713) 00:16:57.446 fused_ordering(714) 00:16:57.446 fused_ordering(715) 00:16:57.446 fused_ordering(716) 00:16:57.446 fused_ordering(717) 00:16:57.446 fused_ordering(718) 00:16:57.446 fused_ordering(719) 00:16:57.446 fused_ordering(720) 00:16:57.446 fused_ordering(721) 00:16:57.446 fused_ordering(722) 00:16:57.446 fused_ordering(723) 00:16:57.446 fused_ordering(724) 00:16:57.446 fused_ordering(725) 00:16:57.446 fused_ordering(726) 00:16:57.446 fused_ordering(727) 00:16:57.446 fused_ordering(728) 00:16:57.446 fused_ordering(729) 00:16:57.446 fused_ordering(730) 00:16:57.446 fused_ordering(731) 00:16:57.446 fused_ordering(732) 00:16:57.446 fused_ordering(733) 00:16:57.446 fused_ordering(734) 00:16:57.446 fused_ordering(735) 00:16:57.446 fused_ordering(736) 00:16:57.446 fused_ordering(737) 00:16:57.446 fused_ordering(738) 00:16:57.446 fused_ordering(739) 00:16:57.446 fused_ordering(740) 00:16:57.446 fused_ordering(741) 
00:16:57.446 fused_ordering(742) 00:16:57.446 fused_ordering(743) 00:16:57.446 fused_ordering(744) 00:16:57.446 fused_ordering(745) 00:16:57.446 fused_ordering(746) 00:16:57.446 fused_ordering(747) 00:16:57.446 fused_ordering(748) 00:16:57.446 fused_ordering(749) 00:16:57.446 fused_ordering(750) 00:16:57.446 fused_ordering(751) 00:16:57.446 fused_ordering(752) 00:16:57.446 fused_ordering(753) 00:16:57.446 fused_ordering(754) 00:16:57.446 fused_ordering(755) 00:16:57.446 fused_ordering(756) 00:16:57.446 fused_ordering(757) 00:16:57.446 fused_ordering(758) 00:16:57.446 fused_ordering(759) 00:16:57.446 fused_ordering(760) 00:16:57.446 fused_ordering(761) 00:16:57.446 fused_ordering(762) 00:16:57.446 fused_ordering(763) 00:16:57.446 fused_ordering(764) 00:16:57.446 fused_ordering(765) 00:16:57.446 fused_ordering(766) 00:16:57.446 fused_ordering(767) 00:16:57.446 fused_ordering(768) 00:16:57.446 fused_ordering(769) 00:16:57.446 fused_ordering(770) 00:16:57.446 fused_ordering(771) 00:16:57.446 fused_ordering(772) 00:16:57.446 fused_ordering(773) 00:16:57.446 fused_ordering(774) 00:16:57.446 fused_ordering(775) 00:16:57.446 fused_ordering(776) 00:16:57.446 fused_ordering(777) 00:16:57.446 fused_ordering(778) 00:16:57.446 fused_ordering(779) 00:16:57.446 fused_ordering(780) 00:16:57.446 fused_ordering(781) 00:16:57.446 fused_ordering(782) 00:16:57.446 fused_ordering(783) 00:16:57.446 fused_ordering(784) 00:16:57.446 fused_ordering(785) 00:16:57.446 fused_ordering(786) 00:16:57.446 fused_ordering(787) 00:16:57.446 fused_ordering(788) 00:16:57.446 fused_ordering(789) 00:16:57.446 fused_ordering(790) 00:16:57.446 fused_ordering(791) 00:16:57.446 fused_ordering(792) 00:16:57.446 fused_ordering(793) 00:16:57.446 fused_ordering(794) 00:16:57.446 fused_ordering(795) 00:16:57.446 fused_ordering(796) 00:16:57.446 fused_ordering(797) 00:16:57.446 fused_ordering(798) 00:16:57.446 fused_ordering(799) 00:16:57.446 fused_ordering(800) 00:16:57.446 fused_ordering(801) 00:16:57.446 fused_ordering(802) 00:16:57.446 fused_ordering(803) 00:16:57.447 fused_ordering(804) 00:16:57.447 fused_ordering(805) 00:16:57.447 fused_ordering(806) 00:16:57.447 fused_ordering(807) 00:16:57.447 fused_ordering(808) 00:16:57.447 fused_ordering(809) 00:16:57.447 fused_ordering(810) 00:16:57.447 fused_ordering(811) 00:16:57.447 fused_ordering(812) 00:16:57.447 fused_ordering(813) 00:16:57.447 fused_ordering(814) 00:16:57.447 fused_ordering(815) 00:16:57.447 fused_ordering(816) 00:16:57.447 fused_ordering(817) 00:16:57.447 fused_ordering(818) 00:16:57.447 fused_ordering(819) 00:16:57.447 fused_ordering(820) 00:16:58.018 fused_ordering(821) 00:16:58.018 fused_ordering(822) 00:16:58.018 fused_ordering(823) 00:16:58.018 fused_ordering(824) 00:16:58.018 fused_ordering(825) 00:16:58.018 fused_ordering(826) 00:16:58.018 fused_ordering(827) 00:16:58.018 fused_ordering(828) 00:16:58.018 fused_ordering(829) 00:16:58.018 fused_ordering(830) 00:16:58.018 fused_ordering(831) 00:16:58.018 fused_ordering(832) 00:16:58.018 fused_ordering(833) 00:16:58.018 fused_ordering(834) 00:16:58.018 fused_ordering(835) 00:16:58.018 fused_ordering(836) 00:16:58.018 fused_ordering(837) 00:16:58.018 fused_ordering(838) 00:16:58.018 fused_ordering(839) 00:16:58.018 fused_ordering(840) 00:16:58.018 fused_ordering(841) 00:16:58.018 fused_ordering(842) 00:16:58.018 fused_ordering(843) 00:16:58.018 fused_ordering(844) 00:16:58.018 fused_ordering(845) 00:16:58.018 fused_ordering(846) 00:16:58.018 fused_ordering(847) 00:16:58.018 fused_ordering(848) 00:16:58.018 
fused_ordering(849) 00:16:58.018 fused_ordering(850) 00:16:58.018 fused_ordering(851) 00:16:58.019 fused_ordering(852) 00:16:58.019 fused_ordering(853) 00:16:58.019 fused_ordering(854) 00:16:58.019 fused_ordering(855) 00:16:58.019 fused_ordering(856) 00:16:58.019 fused_ordering(857) 00:16:58.019 fused_ordering(858) 00:16:58.019 fused_ordering(859) 00:16:58.019 fused_ordering(860) 00:16:58.019 fused_ordering(861) 00:16:58.019 fused_ordering(862) 00:16:58.019 fused_ordering(863) 00:16:58.019 fused_ordering(864) 00:16:58.019 fused_ordering(865) 00:16:58.019 fused_ordering(866) 00:16:58.019 fused_ordering(867) 00:16:58.019 fused_ordering(868) 00:16:58.019 fused_ordering(869) 00:16:58.019 fused_ordering(870) 00:16:58.019 fused_ordering(871) 00:16:58.019 fused_ordering(872) 00:16:58.019 fused_ordering(873) 00:16:58.019 fused_ordering(874) 00:16:58.019 fused_ordering(875) 00:16:58.019 fused_ordering(876) 00:16:58.019 fused_ordering(877) 00:16:58.019 fused_ordering(878) 00:16:58.019 fused_ordering(879) 00:16:58.019 fused_ordering(880) 00:16:58.019 fused_ordering(881) 00:16:58.019 fused_ordering(882) 00:16:58.019 fused_ordering(883) 00:16:58.019 fused_ordering(884) 00:16:58.019 fused_ordering(885) 00:16:58.019 fused_ordering(886) 00:16:58.019 fused_ordering(887) 00:16:58.019 fused_ordering(888) 00:16:58.019 fused_ordering(889) 00:16:58.019 fused_ordering(890) 00:16:58.019 fused_ordering(891) 00:16:58.019 fused_ordering(892) 00:16:58.019 fused_ordering(893) 00:16:58.019 fused_ordering(894) 00:16:58.019 fused_ordering(895) 00:16:58.019 fused_ordering(896) 00:16:58.019 fused_ordering(897) 00:16:58.019 fused_ordering(898) 00:16:58.019 fused_ordering(899) 00:16:58.019 fused_ordering(900) 00:16:58.019 fused_ordering(901) 00:16:58.019 fused_ordering(902) 00:16:58.019 fused_ordering(903) 00:16:58.019 fused_ordering(904) 00:16:58.019 fused_ordering(905) 00:16:58.019 fused_ordering(906) 00:16:58.019 fused_ordering(907) 00:16:58.019 fused_ordering(908) 00:16:58.019 fused_ordering(909) 00:16:58.019 fused_ordering(910) 00:16:58.019 fused_ordering(911) 00:16:58.019 fused_ordering(912) 00:16:58.019 fused_ordering(913) 00:16:58.019 fused_ordering(914) 00:16:58.019 fused_ordering(915) 00:16:58.019 fused_ordering(916) 00:16:58.019 fused_ordering(917) 00:16:58.019 fused_ordering(918) 00:16:58.019 fused_ordering(919) 00:16:58.019 fused_ordering(920) 00:16:58.019 fused_ordering(921) 00:16:58.019 fused_ordering(922) 00:16:58.019 fused_ordering(923) 00:16:58.019 fused_ordering(924) 00:16:58.019 fused_ordering(925) 00:16:58.019 fused_ordering(926) 00:16:58.019 fused_ordering(927) 00:16:58.019 fused_ordering(928) 00:16:58.019 fused_ordering(929) 00:16:58.019 fused_ordering(930) 00:16:58.019 fused_ordering(931) 00:16:58.019 fused_ordering(932) 00:16:58.019 fused_ordering(933) 00:16:58.019 fused_ordering(934) 00:16:58.019 fused_ordering(935) 00:16:58.019 fused_ordering(936) 00:16:58.019 fused_ordering(937) 00:16:58.019 fused_ordering(938) 00:16:58.019 fused_ordering(939) 00:16:58.019 fused_ordering(940) 00:16:58.019 fused_ordering(941) 00:16:58.019 fused_ordering(942) 00:16:58.019 fused_ordering(943) 00:16:58.019 fused_ordering(944) 00:16:58.019 fused_ordering(945) 00:16:58.019 fused_ordering(946) 00:16:58.019 fused_ordering(947) 00:16:58.019 fused_ordering(948) 00:16:58.019 fused_ordering(949) 00:16:58.019 fused_ordering(950) 00:16:58.019 fused_ordering(951) 00:16:58.019 fused_ordering(952) 00:16:58.019 fused_ordering(953) 00:16:58.019 fused_ordering(954) 00:16:58.019 fused_ordering(955) 00:16:58.019 fused_ordering(956) 
00:16:58.019 fused_ordering(957) 00:16:58.019 fused_ordering(958) 00:16:58.019 fused_ordering(959) 00:16:58.019 fused_ordering(960) 00:16:58.019 fused_ordering(961) 00:16:58.019 fused_ordering(962) 00:16:58.019 fused_ordering(963) 00:16:58.019 fused_ordering(964) 00:16:58.019 fused_ordering(965) 00:16:58.019 fused_ordering(966) 00:16:58.019 fused_ordering(967) 00:16:58.019 fused_ordering(968) 00:16:58.019 fused_ordering(969) 00:16:58.019 fused_ordering(970) 00:16:58.019 fused_ordering(971) 00:16:58.019 fused_ordering(972) 00:16:58.019 fused_ordering(973) 00:16:58.019 fused_ordering(974) 00:16:58.019 fused_ordering(975) 00:16:58.019 fused_ordering(976) 00:16:58.019 fused_ordering(977) 00:16:58.019 fused_ordering(978) 00:16:58.019 fused_ordering(979) 00:16:58.019 fused_ordering(980) 00:16:58.019 fused_ordering(981) 00:16:58.019 fused_ordering(982) 00:16:58.019 fused_ordering(983) 00:16:58.019 fused_ordering(984) 00:16:58.019 fused_ordering(985) 00:16:58.019 fused_ordering(986) 00:16:58.019 fused_ordering(987) 00:16:58.019 fused_ordering(988) 00:16:58.019 fused_ordering(989) 00:16:58.019 fused_ordering(990) 00:16:58.019 fused_ordering(991) 00:16:58.019 fused_ordering(992) 00:16:58.019 fused_ordering(993) 00:16:58.019 fused_ordering(994) 00:16:58.019 fused_ordering(995) 00:16:58.019 fused_ordering(996) 00:16:58.019 fused_ordering(997) 00:16:58.019 fused_ordering(998) 00:16:58.019 fused_ordering(999) 00:16:58.019 fused_ordering(1000) 00:16:58.019 fused_ordering(1001) 00:16:58.019 fused_ordering(1002) 00:16:58.019 fused_ordering(1003) 00:16:58.019 fused_ordering(1004) 00:16:58.019 fused_ordering(1005) 00:16:58.019 fused_ordering(1006) 00:16:58.019 fused_ordering(1007) 00:16:58.019 fused_ordering(1008) 00:16:58.019 fused_ordering(1009) 00:16:58.019 fused_ordering(1010) 00:16:58.019 fused_ordering(1011) 00:16:58.019 fused_ordering(1012) 00:16:58.019 fused_ordering(1013) 00:16:58.019 fused_ordering(1014) 00:16:58.019 fused_ordering(1015) 00:16:58.019 fused_ordering(1016) 00:16:58.020 fused_ordering(1017) 00:16:58.020 fused_ordering(1018) 00:16:58.020 fused_ordering(1019) 00:16:58.020 fused_ordering(1020) 00:16:58.020 fused_ordering(1021) 00:16:58.020 fused_ordering(1022) 00:16:58.020 fused_ordering(1023) 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.020 rmmod nvme_tcp 00:16:58.020 rmmod nvme_fabrics 00:16:58.020 rmmod nvme_keyring 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 392226 ']' 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 392226 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 392226 ']' 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 392226 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.020 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 392226 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 392226' 00:16:58.281 killing process with pid 392226 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 392226 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 392226 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.281 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.824 00:17:00.824 real 0m14.673s 00:17:00.824 user 0m7.742s 00:17:00.824 sys 0m7.997s 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:00.824 ************************************ 00:17:00.824 END TEST nvmf_fused_ordering 00:17:00.824 ************************************ 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.824 12:29:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.824 ************************************ 00:17:00.824 START TEST nvmf_ns_masking 00:17:00.824 ************************************ 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:00.824 * Looking for test storage... 00:17:00.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.824 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.825 12:29:33 
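The very long PATH lines above come from paths/export.sh, which appears to be sourced repeatedly over the run and prepends the golangci, Go, and protoc toolchain directories each time, so the same entries pile up. The host identity used later by the masking test was also set just before this point: nvme gen-hostnqn produced NVME_HOSTNQN and its UUID portion is reused as NVME_HOSTID. A minimal sketch (the parameter-expansion step is an assumption about how common.sh derives the hostid):

    PATH=/opt/golangci/1.54.2/bin:$PATH   # paths/export.sh prepends toolchain dirs on every source
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}  # assumed helper logic: reuse the UUID as the hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")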
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fee30d49-6d43-4639-9e19-0e04cdf4f662 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=dbf763a9-76d7-43d7-a11a-c1a2bf379258 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cbb2f997-e769-4dfb-bef6-6d4ec4ac9dbe 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.825 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:09.063 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:09.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:09.063 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.063 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:09.064 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.064 12:29:42 
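At this point nvmf_tcp_init has moved one of the two detected ice/E810 ports (cvl_0_0) into a private network namespace and addressed both ends, so the SPDK target can listen on 10.0.0.2 while the initiator side stays on 10.0.0.1. A compact sketch of the bring-up traced above (interface names and addresses are the ones this rig uses); the loopback inside the namespace, an iptables rule opening TCP port 4420, and a single ping to 10.0.0.2 follow immediately below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up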
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:17:09.064 00:17:09.064 --- 10.0.0.2 ping statistics --- 00:17:09.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.064 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:17:09.064 00:17:09.064 --- 10.0.0.1 ping statistics --- 00:17:09.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.064 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=397349 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 397349 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 397349 ']' 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.064 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:09.064 [2024-07-25 12:29:42.475682] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:17:09.064 [2024-07-25 12:29:42.475744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.324 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.324 [2024-07-25 12:29:42.566267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.324 [2024-07-25 12:29:42.657623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.324 [2024-07-25 12:29:42.657680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.324 [2024-07-25 12:29:42.657688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.324 [2024-07-25 12:29:42.657695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.324 [2024-07-25 12:29:42.657701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.324 [2024-07-25 12:29:42.657732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.264 [2024-07-25 12:29:43.572174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:10.264 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:10.524 Malloc1 00:17:10.524 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:10.784 Malloc2 00:17:10.784 12:29:44 
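The target is now fully provisioned: nvmf_tgt (pid 397349) runs inside the cvl_0_0_ns_spdk namespace, a TCP transport is created, and two 64 MB malloc bdevs are ready to be exposed as namespaces. Roughly, the sequence traced above (rpc.py shown with a relative path; the log uses the absolute workspace path):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                                      # 397349 in this run; waitforlisten polls its RPC socket
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MB malloc disk, 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2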
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:11.045 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:11.304 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.304 [2024-07-25 12:29:44.651395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.304 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:11.304 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbb2f997-e769-4dfb-bef6-6d4ec4ac9dbe -a 10.0.0.2 -s 4420 -i 4 00:17:11.565 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:11.565 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:11.565 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.565 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:11.565 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:13.476 [ 0]:0x1 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:17:13.476 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.736 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a60c92135974b1bbc10d0fed130f0ab 00:17:13.736 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a60c92135974b1bbc10d0fed130f0ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.736 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:13.736 [ 0]:0x1 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a60c92135974b1bbc10d0fed130f0ab 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a60c92135974b1bbc10d0fed130f0ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:13.736 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.995 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:13.995 [ 1]:0x2 00:17:13.995 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:13.995 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.995 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e0df22a701e47a7a769f123b05f4b5e 00:17:13.995 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e0df22a701e47a7a769f123b05f4b5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.995 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:13.995 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.256 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.256 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:14.516 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:14.516 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbb2f997-e769-4dfb-bef6-6d4ec4ac9dbe -a 10.0.0.2 -s 4420 -i 4 00:17:14.777 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:14.777 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:14.777 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.777 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:14.777 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:14.777 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:16.688 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:16.688 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:16.688 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.688 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:16.688 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.688 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:16.688 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:16.689 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:17:16.689 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:16.949 [ 0]:0x2 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e0df22a701e47a7a769f123b05f4b5e 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e0df22a701e47a7a769f123b05f4b5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.949 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:17.209 [ 0]:0x1 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a60c92135974b1bbc10d0fed130f0ab 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a60c92135974b1bbc10d0fed130f0ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:17.209 [ 1]:0x2 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e0df22a701e47a7a769f123b05f4b5e 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e0df22a701e47a7a769f123b05f4b5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.209 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:17.470 [ 0]:0x2 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e0df22a701e47a7a769f123b05f4b5e 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e0df22a701e47a7a769f123b05f4b5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:17.470 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.731 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:17.731 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:17.731 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbb2f997-e769-4dfb-bef6-6d4ec4ac9dbe -a 10.0.0.2 -s 4420 -i 4 00:17:17.992 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:17.992 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:17.992 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.992 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:17.992 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:17.992 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:19.903 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.162 [ 0]:0x1 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9a60c92135974b1bbc10d0fed130f0ab 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9a60c92135974b1bbc10d0fed130f0ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.162 [ 1]:0x2 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.162 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e0df22a701e47a7a769f123b05f4b5e 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e0df22a701e47a7a769f123b05f4b5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:20.422 12:29:53 
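This is the core of the masking test: namespace 1 is attached with --no-auto-visible, so it only appears on a host after nvmf_ns_add_host allows that host's NQN, and it disappears again after nvmf_ns_remove_host. The ns_is_visible helper verifies this from the initiator by grepping nvme list-ns and reading the NGUID, which reads back as all zeros for a masked namespace. A condensed sketch using the names from this run:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again

    # host-side visibility check (ns_is_visible):
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros while the namespace is masked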
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.422 [ 0]:0x2 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.422 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e0df22a701e47a7a769f123b05f4b5e 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e0df22a701e47a7a769f123b05f4b5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:20.683 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:20.683 [2024-07-25 12:29:54.065894] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:20.683 request: 00:17:20.683 { 00:17:20.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.683 "nsid": 2, 00:17:20.683 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.683 "method": "nvmf_ns_remove_host", 00:17:20.683 "req_id": 1 00:17:20.683 } 00:17:20.683 Got JSON-RPC error response 00:17:20.683 response: 00:17:20.683 { 00:17:20.683 "code": -32602, 00:17:20.683 "message": "Invalid parameters" 00:17:20.683 } 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.683 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.944 [ 0]:0x2 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e0df22a701e47a7a769f123b05f4b5e 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e0df22a701e47a7a769f123b05f4b5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=399344 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 399344 /var/tmp/host.sock 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 399344 ']' 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:20.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.944 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:20.944 [2024-07-25 12:29:54.285655] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
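The visibility probe repeated throughout the trace above reduces to the pattern below. This is a condensed sketch rather than the exact ns_masking.sh helper; the controller path and NSID are illustrative.

# Sketch of the namespace-visibility check: the namespace must appear in
# list-ns and report a non-zero NGUID, otherwise the target is masking it.
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x2    # succeeds only while the connected host is allowed to see NSID 2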
00:17:20.944 [2024-07-25 12:29:54.285702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399344 ] 00:17:20.944 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.944 [2024-07-25 12:29:54.362340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.205 [2024-07-25 12:29:54.439992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.776 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.776 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:21.776 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.037 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:22.297 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fee30d49-6d43-4639-9e19-0e04cdf4f662 00:17:22.297 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:22.297 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FEE30D496D4346399E190E04CDF4F662 -i 00:17:22.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid dbf763a9-76d7-43d7-a11a-c1a2bf379258 00:17:22.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:22.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DBF763A976D743D7A11AC1A2BF379258 -i 00:17:22.818 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:22.818 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:23.079 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:23.079 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:23.340 nvme0n1 00:17:23.340 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:23.340 12:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:24.283 nvme1n2 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:24.283 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:24.543 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fee30d49-6d43-4639-9e19-0e04cdf4f662 == \f\e\e\3\0\d\4\9\-\6\d\4\3\-\4\6\3\9\-\9\e\1\9\-\0\e\0\4\c\d\f\4\f\6\6\2 ]] 00:17:24.543 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:24.543 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:24.543 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ dbf763a9-76d7-43d7-a11a-c1a2bf379258 == \d\b\f\7\6\3\a\9\-\7\6\d\7\-\4\3\d\7\-\a\1\1\a\-\c\1\a\2\b\f\3\7\9\2\5\8 ]] 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 399344 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 399344 ']' 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 399344 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 399344 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:24.803 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 399344' 00:17:24.803 killing process with pid 399344 00:17:24.804 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 399344 00:17:24.804 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 399344 00:17:25.064 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.324 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.324 rmmod nvme_tcp 00:17:25.324 rmmod nvme_fabrics 00:17:25.325 rmmod nvme_keyring 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 397349 ']' 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 397349 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 397349 ']' 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 397349 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 397349 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 397349' 00:17:25.325 killing process with pid 397349 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 397349 00:17:25.325 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 397349 00:17:25.586 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.586 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.586 12:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.586 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.586 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.586 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.586 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.586 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.132 00:17:28.132 real 0m27.124s 00:17:28.132 user 0m28.464s 00:17:28.132 sys 0m8.461s 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:28.132 ************************************ 00:17:28.132 END TEST nvmf_ns_masking 00:17:28.132 ************************************ 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.132 12:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.132 ************************************ 00:17:28.132 START TEST nvmf_nvme_cli 00:17:28.132 ************************************ 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:28.132 * Looking for test storage... 
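Earlier in the trace, uuid2nguid turns each namespace UUID into the 32-digit NGUID handed to nvmf_subsystem_add_ns via -g. A minimal equivalent is sketched below; the trace only exposes the tr -d - step, so the uppercasing is inferred from the resulting FEE30D49... value.

uuid=fee30d49-6d43-4639-9e19-0e04cdf4f662        # UUID from the trace above
nguid=$(echo "${uuid^^}" | tr -d -)              # -> FEE30D496D4346399E190E04CDF4F662
# the result is what the trace passes as:
#   rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid"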
00:17:28.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.132 12:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.133 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.275 12:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:36.275 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:36.275 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.275 12:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:36.275 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:36.275 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.275 12:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.275 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:17:36.275 00:17:36.275 --- 10.0.0.2 ping statistics --- 00:17:36.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.276 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:17:36.276 00:17:36.276 --- 10.0.0.1 ping statistics --- 00:17:36.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.276 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=404786 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 404786 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 404786 ']' 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.276 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:36.276 [2024-07-25 12:30:09.674721] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
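The nvmftestinit sequence above pins the target-side port inside a network namespace and keeps the other port as the initiator. Condensed, and using only the interface names and addresses already shown in the trace:

ip netns add cvl_0_0_ns_spdk                                    # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # initiator -> target sanity check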
00:17:36.276 [2024-07-25 12:30:09.674783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.537 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.537 [2024-07-25 12:30:09.770326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.537 [2024-07-25 12:30:09.866635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.537 [2024-07-25 12:30:09.866694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.537 [2024-07-25 12:30:09.866702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.537 [2024-07-25 12:30:09.866709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.537 [2024-07-25 12:30:09.866714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.537 [2024-07-25 12:30:09.866866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.537 [2024-07-25 12:30:09.867020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.537 [2024-07-25 12:30:09.867173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.537 [2024-07-25 12:30:09.867173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.108 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.108 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:17:37.108 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.108 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.108 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 [2024-07-25 12:30:10.570486] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 Malloc0 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:37.368 12:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 Malloc1 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 [2024-07-25 12:30:10.675525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:17:37.368 00:17:37.368 Discovery Log Number of Records 2, Generation counter 2 00:17:37.368 =====Discovery Log Entry 0====== 00:17:37.368 trtype: tcp 00:17:37.368 adrfam: ipv4 00:17:37.368 subtype: current discovery subsystem 00:17:37.368 treq: not required 
00:17:37.368 portid: 0 00:17:37.368 trsvcid: 4420 00:17:37.368 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:37.368 traddr: 10.0.0.2 00:17:37.368 eflags: explicit discovery connections, duplicate discovery information 00:17:37.368 sectype: none 00:17:37.368 =====Discovery Log Entry 1====== 00:17:37.368 trtype: tcp 00:17:37.368 adrfam: ipv4 00:17:37.368 subtype: nvme subsystem 00:17:37.368 treq: not required 00:17:37.368 portid: 0 00:17:37.368 trsvcid: 4420 00:17:37.368 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:37.368 traddr: 10.0.0.2 00:17:37.368 eflags: none 00:17:37.368 sectype: none 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:37.368 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:37.369 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:37.369 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:37.369 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:37.369 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:37.369 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:39.281 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:39.281 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:39.281 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.281 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:39.281 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:39.281 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:41.199 /dev/nvme0n1 ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:41.199 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:41.484 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.484 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.745 rmmod nvme_tcp 00:17:41.745 rmmod nvme_fabrics 00:17:41.745 rmmod nvme_keyring 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 404786 ']' 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 404786 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 404786 ']' 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 404786 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 404786 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 404786' 00:17:41.745 killing process with pid 404786 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 404786 00:17:41.745 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 404786 00:17:42.004 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.004 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.004 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.004 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.004 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.004 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.005 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.005 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.918 00:17:43.918 real 0m16.222s 00:17:43.918 user 0m23.433s 00:17:43.918 sys 0m6.945s 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:43.918 ************************************ 00:17:43.918 END TEST nvmf_nvme_cli 00:17:43.918 ************************************ 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.918 ************************************ 00:17:43.918 START TEST nvmf_vfio_user 00:17:43.918 ************************************ 00:17:43.918 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:44.179 * Looking for test storage... 
00:17:44.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.179 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:44.180 12:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=406430 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 406430' 00:17:44.180 Process pid: 406430 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 406430 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 406430 ']' 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.180 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:44.180 [2024-07-25 12:30:17.511572] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:17:44.180 [2024-07-25 12:30:17.511637] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.180 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.180 [2024-07-25 12:30:17.581750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.441 [2024-07-25 12:30:17.649089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.441 [2024-07-25 12:30:17.649126] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:44.441 [2024-07-25 12:30:17.649134] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.441 [2024-07-25 12:30:17.649140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.441 [2024-07-25 12:30:17.649145] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.441 [2024-07-25 12:30:17.649255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.441 [2024-07-25 12:30:17.649293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.441 [2024-07-25 12:30:17.649326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.441 [2024-07-25 12:30:17.649328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.441 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.441 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:44.441 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:45.821 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:45.821 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:45.821 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:45.821 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:45.821 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:45.821 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:45.821 Malloc1 00:17:45.821 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:46.081 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:46.341 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:46.601 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:46.601 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:46.601 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:46.862 Malloc2 00:17:46.862 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
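The xtrace records above show the per-device bring-up that the test performs for the first vfio-user controller; the matching add-ns and add-listener calls for the second device (cnode2) follow below. Collapsed into a plain shell sketch — command names, paths, core mask and sizes are taken directly from this run, and the socket locations will differ on another machine — the sequence is roughly:

    # start the target and create the VFIOUSER transport (run from the SPDK repo root)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &

    scripts/rpc.py nvmf_create_transport -t VFIOUSER

    # per device: socket directory, backing bdev, subsystem, namespace, listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0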
00:17:47.122 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:47.122 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:47.382 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:47.382 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:47.382 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.382 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:47.382 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:47.382 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:47.382 [2024-07-25 12:30:20.739424] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:17:47.383 [2024-07-25 12:30:20.739493] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407030 ] 00:17:47.383 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.383 [2024-07-25 12:30:20.771318] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:47.383 [2024-07-25 12:30:20.779940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.383 [2024-07-25 12:30:20.779960] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2f86f48000 00:17:47.383 [2024-07-25 12:30:20.780940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.781940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.782944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.783954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.784955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.785962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.786970] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.787975] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.383 [2024-07-25 12:30:20.788981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.383 [2024-07-25 12:30:20.788989] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2f86f3d000 00:17:47.383 [2024-07-25 12:30:20.790214] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.644 [2024-07-25 12:30:20.810617] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:47.644 [2024-07-25 12:30:20.810641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:47.645 [2024-07-25 12:30:20.813164] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.645 [2024-07-25 12:30:20.813208] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:47.645 [2024-07-25 12:30:20.813307] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:47.645 [2024-07-25 12:30:20.813322] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:47.645 [2024-07-25 12:30:20.813328] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:47.645 [2024-07-25 12:30:20.814161] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:47.645 [2024-07-25 12:30:20.814171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:47.645 [2024-07-25 12:30:20.814178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:47.645 [2024-07-25 12:30:20.815168] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.645 [2024-07-25 12:30:20.815176] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:47.645 [2024-07-25 12:30:20.815183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.645 [2024-07-25 12:30:20.816176] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:47.645 [2024-07-25 12:30:20.816185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.645 [2024-07-25 12:30:20.817178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:47.645 [2024-07-25 12:30:20.817186] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:47.645 [2024-07-25 12:30:20.817190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:47.645 [2024-07-25 12:30:20.817196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.645 [2024-07-25 12:30:20.817302] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:47.645 [2024-07-25 12:30:20.817306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.645 [2024-07-25 12:30:20.817311] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:47.645 [2024-07-25 12:30:20.818189] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:47.645 [2024-07-25 12:30:20.819198] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:47.645 [2024-07-25 12:30:20.820204] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.645 [2024-07-25 12:30:20.821200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.645 [2024-07-25 12:30:20.821278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.645 [2024-07-25 12:30:20.822213] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:47.645 [2024-07-25 12:30:20.822220] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.645 [2024-07-25 12:30:20.822224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822244] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:47.645 [2024-07-25 12:30:20.822251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822265] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.645 [2024-07-25 12:30:20.822270] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.645 [2024-07-25 12:30:20.822275] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.645 [2024-07-25 12:30:20.822288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.645 [2024-07-25 12:30:20.822345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:47.645 [2024-07-25 12:30:20.822354] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:47.645 [2024-07-25 12:30:20.822359] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:47.645 [2024-07-25 12:30:20.822363] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:47.645 [2024-07-25 12:30:20.822367] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:47.645 [2024-07-25 12:30:20.822371] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:47.645 [2024-07-25 12:30:20.822376] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:47.645 [2024-07-25 12:30:20.822380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:47.645 [2024-07-25 12:30:20.822417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:47.645 [2024-07-25 12:30:20.822429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.645 [2024-07-25 12:30:20.822437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.645 [2024-07-25 12:30:20.822445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.645 [2024-07-25 12:30:20.822453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.645 [2024-07-25 12:30:20.822458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:47.645 [2024-07-25 12:30:20.822488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:47.645 [2024-07-25 12:30:20.822494] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:47.645 
[2024-07-25 12:30:20.822498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.645 [2024-07-25 12:30:20.822543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:47.645 [2024-07-25 12:30:20.822603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822611] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822618] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:47.645 [2024-07-25 12:30:20.822622] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:47.645 [2024-07-25 12:30:20.822625] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.645 [2024-07-25 12:30:20.822631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:47.645 [2024-07-25 12:30:20.822649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:47.645 [2024-07-25 12:30:20.822659] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:47.645 [2024-07-25 12:30:20.822671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.645 [2024-07-25 12:30:20.822684] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.645 [2024-07-25 12:30:20.822688] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.645 [2024-07-25 12:30:20.822691] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.645 [2024-07-25 12:30:20.822697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.645 [2024-07-25 12:30:20.822725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:47.645 [2024-07-25 12:30:20.822737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822751] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.646 [2024-07-25 12:30:20.822754] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.646 [2024-07-25 12:30:20.822757] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.646 [2024-07-25 12:30:20.822763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.646 [2024-07-25 12:30:20.822781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 12:30:20.822789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822826] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.646 [2024-07-25 12:30:20.822830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:47.646 [2024-07-25 12:30:20.822834] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:47.646 [2024-07-25 12:30:20.822851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:47.646 [2024-07-25 12:30:20.822865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 12:30:20.822875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:47.646 [2024-07-25 12:30:20.822890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 12:30:20.822900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:47.646 [2024-07-25 
12:30:20.822920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 12:30:20.822930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.646 [2024-07-25 12:30:20.822945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 12:30:20.822958] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:47.646 [2024-07-25 12:30:20.822962] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:47.646 [2024-07-25 12:30:20.822965] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:47.646 [2024-07-25 12:30:20.822969] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:47.646 [2024-07-25 12:30:20.822972] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:47.646 [2024-07-25 12:30:20.822977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:47.646 [2024-07-25 12:30:20.822984] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:47.646 [2024-07-25 12:30:20.822988] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:47.646 [2024-07-25 12:30:20.822992] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.646 [2024-07-25 12:30:20.822997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:47.646 [2024-07-25 12:30:20.823004] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:47.646 [2024-07-25 12:30:20.823008] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.646 [2024-07-25 12:30:20.823011] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.646 [2024-07-25 12:30:20.823017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.646 [2024-07-25 12:30:20.823025] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:47.646 [2024-07-25 12:30:20.823029] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:47.646 [2024-07-25 12:30:20.823032] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.646 [2024-07-25 12:30:20.823037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:47.646 [2024-07-25 12:30:20.823044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 12:30:20.823054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 
12:30:20.823066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:47.646 [2024-07-25 12:30:20.823072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:47.646 ===================================================== 00:17:47.646 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.646 ===================================================== 00:17:47.646 Controller Capabilities/Features 00:17:47.646 ================================ 00:17:47.646 Vendor ID: 4e58 00:17:47.646 Subsystem Vendor ID: 4e58 00:17:47.646 Serial Number: SPDK1 00:17:47.646 Model Number: SPDK bdev Controller 00:17:47.646 Firmware Version: 24.09 00:17:47.646 Recommended Arb Burst: 6 00:17:47.646 IEEE OUI Identifier: 8d 6b 50 00:17:47.646 Multi-path I/O 00:17:47.646 May have multiple subsystem ports: Yes 00:17:47.646 May have multiple controllers: Yes 00:17:47.646 Associated with SR-IOV VF: No 00:17:47.646 Max Data Transfer Size: 131072 00:17:47.646 Max Number of Namespaces: 32 00:17:47.646 Max Number of I/O Queues: 127 00:17:47.646 NVMe Specification Version (VS): 1.3 00:17:47.646 NVMe Specification Version (Identify): 1.3 00:17:47.646 Maximum Queue Entries: 256 00:17:47.646 Contiguous Queues Required: Yes 00:17:47.646 Arbitration Mechanisms Supported 00:17:47.646 Weighted Round Robin: Not Supported 00:17:47.646 Vendor Specific: Not Supported 00:17:47.646 Reset Timeout: 15000 ms 00:17:47.646 Doorbell Stride: 4 bytes 00:17:47.646 NVM Subsystem Reset: Not Supported 00:17:47.646 Command Sets Supported 00:17:47.646 NVM Command Set: Supported 00:17:47.646 Boot Partition: Not Supported 00:17:47.646 Memory Page Size Minimum: 4096 bytes 00:17:47.646 Memory Page Size Maximum: 4096 bytes 00:17:47.646 Persistent Memory Region: Not Supported 00:17:47.646 Optional Asynchronous Events Supported 00:17:47.646 Namespace Attribute Notices: Supported 00:17:47.646 Firmware Activation Notices: Not Supported 00:17:47.646 ANA Change Notices: Not Supported 00:17:47.646 PLE Aggregate Log Change Notices: Not Supported 00:17:47.646 LBA Status Info Alert Notices: Not Supported 00:17:47.646 EGE Aggregate Log Change Notices: Not Supported 00:17:47.646 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.646 Zone Descriptor Change Notices: Not Supported 00:17:47.646 Discovery Log Change Notices: Not Supported 00:17:47.646 Controller Attributes 00:17:47.646 128-bit Host Identifier: Supported 00:17:47.646 Non-Operational Permissive Mode: Not Supported 00:17:47.646 NVM Sets: Not Supported 00:17:47.646 Read Recovery Levels: Not Supported 00:17:47.646 Endurance Groups: Not Supported 00:17:47.646 Predictable Latency Mode: Not Supported 00:17:47.646 Traffic Based Keep ALive: Not Supported 00:17:47.646 Namespace Granularity: Not Supported 00:17:47.646 SQ Associations: Not Supported 00:17:47.646 UUID List: Not Supported 00:17:47.646 Multi-Domain Subsystem: Not Supported 00:17:47.646 Fixed Capacity Management: Not Supported 00:17:47.646 Variable Capacity Management: Not Supported 00:17:47.646 Delete Endurance Group: Not Supported 00:17:47.646 Delete NVM Set: Not Supported 00:17:47.646 Extended LBA Formats Supported: Not Supported 00:17:47.646 Flexible Data Placement Supported: Not Supported 00:17:47.646 00:17:47.646 Controller Memory Buffer Support 00:17:47.646 ================================ 00:17:47.646 Supported: No 00:17:47.646 00:17:47.646 Persistent 
Memory Region Support 00:17:47.646 ================================ 00:17:47.646 Supported: No 00:17:47.646 00:17:47.646 Admin Command Set Attributes 00:17:47.646 ============================ 00:17:47.646 Security Send/Receive: Not Supported 00:17:47.646 Format NVM: Not Supported 00:17:47.646 Firmware Activate/Download: Not Supported 00:17:47.647 Namespace Management: Not Supported 00:17:47.647 Device Self-Test: Not Supported 00:17:47.647 Directives: Not Supported 00:17:47.647 NVMe-MI: Not Supported 00:17:47.647 Virtualization Management: Not Supported 00:17:47.647 Doorbell Buffer Config: Not Supported 00:17:47.647 Get LBA Status Capability: Not Supported 00:17:47.647 Command & Feature Lockdown Capability: Not Supported 00:17:47.647 Abort Command Limit: 4 00:17:47.647 Async Event Request Limit: 4 00:17:47.647 Number of Firmware Slots: N/A 00:17:47.647 Firmware Slot 1 Read-Only: N/A 00:17:47.647 Firmware Activation Without Reset: N/A 00:17:47.647 Multiple Update Detection Support: N/A 00:17:47.647 Firmware Update Granularity: No Information Provided 00:17:47.647 Per-Namespace SMART Log: No 00:17:47.647 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.647 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:47.647 Command Effects Log Page: Supported 00:17:47.647 Get Log Page Extended Data: Supported 00:17:47.647 Telemetry Log Pages: Not Supported 00:17:47.647 Persistent Event Log Pages: Not Supported 00:17:47.647 Supported Log Pages Log Page: May Support 00:17:47.647 Commands Supported & Effects Log Page: Not Supported 00:17:47.647 Feature Identifiers & Effects Log Page:May Support 00:17:47.647 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.647 Data Area 4 for Telemetry Log: Not Supported 00:17:47.647 Error Log Page Entries Supported: 128 00:17:47.647 Keep Alive: Supported 00:17:47.647 Keep Alive Granularity: 10000 ms 00:17:47.647 00:17:47.647 NVM Command Set Attributes 00:17:47.647 ========================== 00:17:47.647 Submission Queue Entry Size 00:17:47.647 Max: 64 00:17:47.647 Min: 64 00:17:47.647 Completion Queue Entry Size 00:17:47.647 Max: 16 00:17:47.647 Min: 16 00:17:47.647 Number of Namespaces: 32 00:17:47.647 Compare Command: Supported 00:17:47.647 Write Uncorrectable Command: Not Supported 00:17:47.647 Dataset Management Command: Supported 00:17:47.647 Write Zeroes Command: Supported 00:17:47.647 Set Features Save Field: Not Supported 00:17:47.647 Reservations: Not Supported 00:17:47.647 Timestamp: Not Supported 00:17:47.647 Copy: Supported 00:17:47.647 Volatile Write Cache: Present 00:17:47.647 Atomic Write Unit (Normal): 1 00:17:47.647 Atomic Write Unit (PFail): 1 00:17:47.647 Atomic Compare & Write Unit: 1 00:17:47.647 Fused Compare & Write: Supported 00:17:47.647 Scatter-Gather List 00:17:47.647 SGL Command Set: Supported (Dword aligned) 00:17:47.647 SGL Keyed: Not Supported 00:17:47.647 SGL Bit Bucket Descriptor: Not Supported 00:17:47.647 SGL Metadata Pointer: Not Supported 00:17:47.647 Oversized SGL: Not Supported 00:17:47.647 SGL Metadata Address: Not Supported 00:17:47.647 SGL Offset: Not Supported 00:17:47.647 Transport SGL Data Block: Not Supported 00:17:47.647 Replay Protected Memory Block: Not Supported 00:17:47.647 00:17:47.647 Firmware Slot Information 00:17:47.647 ========================= 00:17:47.647 Active slot: 1 00:17:47.647 Slot 1 Firmware Revision: 24.09 00:17:47.647 00:17:47.647 00:17:47.647 Commands Supported and Effects 00:17:47.647 ============================== 00:17:47.647 Admin Commands 00:17:47.647 -------------- 00:17:47.647 Get 
Log Page (02h): Supported 00:17:47.647 Identify (06h): Supported 00:17:47.647 Abort (08h): Supported 00:17:47.647 Set Features (09h): Supported 00:17:47.647 Get Features (0Ah): Supported 00:17:47.647 Asynchronous Event Request (0Ch): Supported 00:17:47.647 Keep Alive (18h): Supported 00:17:47.647 I/O Commands 00:17:47.647 ------------ 00:17:47.647 Flush (00h): Supported LBA-Change 00:17:47.647 Write (01h): Supported LBA-Change 00:17:47.647 Read (02h): Supported 00:17:47.647 Compare (05h): Supported 00:17:47.647 Write Zeroes (08h): Supported LBA-Change 00:17:47.647 Dataset Management (09h): Supported LBA-Change 00:17:47.647 Copy (19h): Supported LBA-Change 00:17:47.647 00:17:47.647 Error Log 00:17:47.647 ========= 00:17:47.647 00:17:47.647 Arbitration 00:17:47.647 =========== 00:17:47.647 Arbitration Burst: 1 00:17:47.647 00:17:47.647 Power Management 00:17:47.647 ================ 00:17:47.647 Number of Power States: 1 00:17:47.647 Current Power State: Power State #0 00:17:47.647 Power State #0: 00:17:47.647 Max Power: 0.00 W 00:17:47.647 Non-Operational State: Operational 00:17:47.647 Entry Latency: Not Reported 00:17:47.647 Exit Latency: Not Reported 00:17:47.647 Relative Read Throughput: 0 00:17:47.647 Relative Read Latency: 0 00:17:47.647 Relative Write Throughput: 0 00:17:47.647 Relative Write Latency: 0 00:17:47.647 Idle Power: Not Reported 00:17:47.647 Active Power: Not Reported 00:17:47.647 Non-Operational Permissive Mode: Not Supported 00:17:47.647 00:17:47.647 Health Information 00:17:47.647 ================== 00:17:47.647 Critical Warnings: 00:17:47.647 Available Spare Space: OK 00:17:47.647 Temperature: OK 00:17:47.647 Device Reliability: OK 00:17:47.647 Read Only: No 00:17:47.647 Volatile Memory Backup: OK 00:17:47.647 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.647 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:47.647 Available Spare: 0% 00:17:47.647 Available Sp[2024-07-25 12:30:20.823165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:47.647 [2024-07-25 12:30:20.823177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:47.647 [2024-07-25 12:30:20.823202] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:47.647 [2024-07-25 12:30:20.823211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.647 [2024-07-25 12:30:20.823217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.647 [2024-07-25 12:30:20.823223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.647 [2024-07-25 12:30:20.823228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.647 [2024-07-25 12:30:20.824236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.647 [2024-07-25 12:30:20.824247] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:47.647 [2024-07-25 12:30:20.825240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:47.647 [2024-07-25 12:30:20.825305] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:47.647 [2024-07-25 12:30:20.825311] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:47.647 [2024-07-25 12:30:20.826250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:47.647 [2024-07-25 12:30:20.826259] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:47.647 [2024-07-25 12:30:20.826313] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:47.647 [2024-07-25 12:30:20.830555] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.647 are Threshold: 0% 00:17:47.647 Life Percentage Used: 0% 00:17:47.647 Data Units Read: 0 00:17:47.647 Data Units Written: 0 00:17:47.647 Host Read Commands: 0 00:17:47.647 Host Write Commands: 0 00:17:47.647 Controller Busy Time: 0 minutes 00:17:47.647 Power Cycles: 0 00:17:47.647 Power On Hours: 0 hours 00:17:47.647 Unsafe Shutdowns: 0 00:17:47.647 Unrecoverable Media Errors: 0 00:17:47.647 Lifetime Error Log Entries: 0 00:17:47.647 Warning Temperature Time: 0 minutes 00:17:47.647 Critical Temperature Time: 0 minutes 00:17:47.647 00:17:47.647 Number of Queues 00:17:47.647 ================ 00:17:47.647 Number of I/O Submission Queues: 127 00:17:47.647 Number of I/O Completion Queues: 127 00:17:47.647 00:17:47.647 Active Namespaces 00:17:47.647 ================= 00:17:47.647 Namespace ID:1 00:17:47.647 Error Recovery Timeout: Unlimited 00:17:47.647 Command Set Identifier: NVM (00h) 00:17:47.647 Deallocate: Supported 00:17:47.647 Deallocated/Unwritten Error: Not Supported 00:17:47.647 Deallocated Read Value: Unknown 00:17:47.647 Deallocate in Write Zeroes: Not Supported 00:17:47.647 Deallocated Guard Field: 0xFFFF 00:17:47.647 Flush: Supported 00:17:47.647 Reservation: Supported 00:17:47.648 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.648 Size (in LBAs): 131072 (0GiB) 00:17:47.648 Capacity (in LBAs): 131072 (0GiB) 00:17:47.648 Utilization (in LBAs): 131072 (0GiB) 00:17:47.648 NGUID: AA6902F3D148450F994FC3C19D8ACA1C 00:17:47.648 UUID: aa6902f3-d148-450f-994f-c3c19d8aca1c 00:17:47.648 Thin Provisioning: Not Supported 00:17:47.648 Per-NS Atomic Units: Yes 00:17:47.648 Atomic Boundary Size (Normal): 0 00:17:47.648 Atomic Boundary Size (PFail): 0 00:17:47.648 Atomic Boundary Offset: 0 00:17:47.648 Maximum Single Source Range Length: 65535 00:17:47.648 Maximum Copy Length: 65535 00:17:47.648 Maximum Source Range Count: 1 00:17:47.648 NGUID/EUI64 Never Reused: No 00:17:47.648 Namespace Write Protected: No 00:17:47.648 Number of LBA Formats: 1 00:17:47.648 Current LBA Format: LBA Format #00 00:17:47.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.648 00:17:47.648 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:47.648 EAL: No free 2048 kB hugepages reported 
on node 1 00:17:47.648 [2024-07-25 12:30:21.042787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:52.932 Initializing NVMe Controllers 00:17:52.932 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:52.932 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:52.932 Initialization complete. Launching workers. 00:17:52.932 ======================================================== 00:17:52.932 Latency(us) 00:17:52.932 Device Information : IOPS MiB/s Average min max 00:17:52.932 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16134.69 63.03 7938.57 3034.56 18067.41 00:17:52.932 ======================================================== 00:17:52.932 Total : 16134.69 63.03 7938.57 3034.56 18067.41 00:17:52.932 00:17:52.932 [2024-07-25 12:30:26.071281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:52.932 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:52.932 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.932 [2024-07-25 12:30:26.309706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:58.219 Initializing NVMe Controllers 00:17:58.219 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:58.219 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:58.219 Initialization complete. Launching workers. 
00:17:58.219 ======================================================== 00:17:58.219 Latency(us) 00:17:58.219 Device Information : IOPS MiB/s Average min max 00:17:58.219 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.51 62.50 8004.65 5031.37 15450.01 00:17:58.219 ======================================================== 00:17:58.219 Total : 16000.51 62.50 8004.65 5031.37 15450.01 00:17:58.219 00:17:58.219 [2024-07-25 12:30:31.351362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:58.219 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:58.219 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.219 [2024-07-25 12:30:31.631033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.501 [2024-07-25 12:30:36.711050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.501 Initializing NVMe Controllers 00:18:03.501 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.501 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:03.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:03.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:03.501 Initialization complete. Launching workers. 00:18:03.501 Starting thread on core 2 00:18:03.501 Starting thread on core 3 00:18:03.501 Starting thread on core 1 00:18:03.501 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:03.501 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.761 [2024-07-25 12:30:37.005285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.057 [2024-07-25 12:30:40.070535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:07.057 Initializing NVMe Controllers 00:18:07.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:07.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:07.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:07.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:07.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:07.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:07.057 Initialization complete. Launching workers. 
00:18:07.057 Starting thread on core 1 with urgent priority queue 00:18:07.057 Starting thread on core 2 with urgent priority queue 00:18:07.057 Starting thread on core 3 with urgent priority queue 00:18:07.057 Starting thread on core 0 with urgent priority queue 00:18:07.057 SPDK bdev Controller (SPDK1 ) core 0: 4377.33 IO/s 22.84 secs/100000 ios 00:18:07.057 SPDK bdev Controller (SPDK1 ) core 1: 4718.33 IO/s 21.19 secs/100000 ios 00:18:07.057 SPDK bdev Controller (SPDK1 ) core 2: 7650.33 IO/s 13.07 secs/100000 ios 00:18:07.057 SPDK bdev Controller (SPDK1 ) core 3: 9082.33 IO/s 11.01 secs/100000 ios 00:18:07.057 ======================================================== 00:18:07.057 00:18:07.057 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:07.057 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.057 [2024-07-25 12:30:40.341455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.057 Initializing NVMe Controllers 00:18:07.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.057 Namespace ID: 1 size: 0GB 00:18:07.057 Initialization complete. 00:18:07.057 INFO: using host memory buffer for IO 00:18:07.057 Hello world! 00:18:07.057 [2024-07-25 12:30:40.373958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:07.057 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:07.057 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.317 [2024-07-25 12:30:40.629665] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.258 [2024-07-25 12:30:41.649538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.519 Initializing NVMe Controllers 00:18:08.519 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.519 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.519 Initialization complete. Launching workers. 
00:18:08.519 submit (in ns) avg, min, max = 6930.3, 3651.5, 3999126.9 00:18:08.519 complete (in ns) avg, min, max = 54688.8, 2202.3, 4994861.5 00:18:08.519 00:18:08.519 Submit histogram 00:18:08.519 ================ 00:18:08.519 Range in us Cumulative Count 00:18:08.519 3.643 - 3.668: 0.8439% ( 54) 00:18:08.519 3.668 - 3.692: 4.7195% ( 248) 00:18:08.519 3.692 - 3.717: 13.0958% ( 536) 00:18:08.519 3.717 - 3.742: 22.7536% ( 618) 00:18:08.519 3.742 - 3.766: 33.7084% ( 701) 00:18:08.519 3.766 - 3.791: 45.0852% ( 728) 00:18:08.519 3.791 - 3.815: 60.4782% ( 985) 00:18:08.519 3.815 - 3.840: 75.6993% ( 974) 00:18:08.519 3.840 - 3.865: 87.7481% ( 771) 00:18:08.519 3.865 - 3.889: 95.2493% ( 480) 00:18:08.519 3.889 - 3.914: 98.1560% ( 186) 00:18:08.519 3.914 - 3.938: 99.0624% ( 58) 00:18:08.519 3.938 - 3.963: 99.3905% ( 21) 00:18:08.519 3.963 - 3.988: 99.5312% ( 9) 00:18:08.519 3.988 - 4.012: 99.5937% ( 4) 00:18:08.519 4.209 - 4.234: 99.6093% ( 1) 00:18:08.519 5.194 - 5.218: 99.6249% ( 1) 00:18:08.519 6.498 - 6.548: 99.6406% ( 1) 00:18:08.519 6.745 - 6.794: 99.6562% ( 1) 00:18:08.519 6.892 - 6.942: 99.6718% ( 1) 00:18:08.519 6.942 - 6.991: 99.6875% ( 1) 00:18:08.519 6.991 - 7.040: 99.7031% ( 1) 00:18:08.519 7.089 - 7.138: 99.7187% ( 1) 00:18:08.519 7.188 - 7.237: 99.7343% ( 1) 00:18:08.519 7.286 - 7.335: 99.7656% ( 2) 00:18:08.519 7.335 - 7.385: 99.7812% ( 1) 00:18:08.519 7.532 - 7.582: 99.7968% ( 1) 00:18:08.519 7.729 - 7.778: 99.8125% ( 1) 00:18:08.519 7.877 - 7.926: 99.8281% ( 1) 00:18:08.519 8.123 - 8.172: 99.8437% ( 1) 00:18:08.519 8.172 - 8.222: 99.8594% ( 1) 00:18:08.519 8.615 - 8.665: 99.8750% ( 1) 00:18:08.519 9.551 - 9.600: 99.8906% ( 1) 00:18:08.519 13.588 - 13.686: 99.9062% ( 1) 00:18:08.519 29.538 - 29.735: 99.9219% ( 1) 00:18:08.519 3982.572 - 4007.778: 100.0000% ( 5) 00:18:08.519 00:18:08.519 Complete histogram 00:18:08.519 ================== 00:18:08.519 Range in us Cumulative Count 00:18:08.519 2.191 - 2.203: 0.0156% ( 1) 00:18:08.519 2.203 - 2.215: 3.7037% ( 236) 00:18:08.519 2.215 - 2.228: 4.0788% ( 24) 00:18:08.519 2.228 - 2.240: 5.0477% ( 62) 00:18:08.519 2.240 - 2.252: 5.5165% ( 30) 00:18:08.519 2.252 - 2.265: 5.5321% ( 1) 00:18:08.519 2.265 - 2.277: 32.7707% ( 1743) 00:18:08.519 2.277 - 2.289: 60.0563% ( 1746) 00:18:08.519 2.289 - 2.302: 67.4949% ( 476) 00:18:08.519 2.302 - 2.314: 83.6381% ( 1033) 00:18:08.519 2.314 - 2.326: 88.8420% ( 333) 00:18:08.519 2.326 - 2.338: 90.2485% ( 90) 00:18:08.519 2.338 - 2.351: 90.6548% ( 26) 00:18:08.519 2.351 - 2.363: 90.7954% ( 9) 00:18:08.519 2.363 - 2.375: 91.8581% ( 68) 00:18:08.519 2.375 - 2.388: 94.4210% ( 164) 00:18:08.519 2.388 - 2.400: 96.4213% ( 128) 00:18:08.520 2.400 - 2.412: 97.5309% ( 71) 00:18:08.520 2.412 - 2.425: 98.2966% ( 49) 00:18:08.520 2.425 - 2.437: 98.5310% ( 15) 00:18:08.520 2.437 - 2.449: 98.5779% ( 3) 00:18:08.520 2.474 - 2.486: 98.5935% ( 1) 00:18:08.520 5.243 - 5.268: 98.6092% ( 1) 00:18:08.520 5.342 - 5.366: 98.6248% ( 1) 00:18:08.520 5.637 - 5.662: 98.6404% ( 1) 00:18:08.520 6.006 - 6.031: 98.6560% ( 1) 00:18:08.520 6.400 - 6.449: 98.6717% ( 1) 00:18:08.520 39.975 - 40.172: 98.6873% ( 1) 00:18:08.520 2634.043 - 2646.646: 98.7029% ( 1) 00:18:08.520 3982.572 - 4007.778: 99.9844% ( 82) 00:18:08.520 4990.818 - 5016.025: 100.0000% ( 1) 00:18:08.520 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.520 [ 00:18:08.520 { 00:18:08.520 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:08.520 "subtype": "Discovery", 00:18:08.520 "listen_addresses": [], 00:18:08.520 "allow_any_host": true, 00:18:08.520 "hosts": [] 00:18:08.520 }, 00:18:08.520 { 00:18:08.520 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:08.520 "subtype": "NVMe", 00:18:08.520 "listen_addresses": [ 00:18:08.520 { 00:18:08.520 "trtype": "VFIOUSER", 00:18:08.520 "adrfam": "IPv4", 00:18:08.520 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:08.520 "trsvcid": "0" 00:18:08.520 } 00:18:08.520 ], 00:18:08.520 "allow_any_host": true, 00:18:08.520 "hosts": [], 00:18:08.520 "serial_number": "SPDK1", 00:18:08.520 "model_number": "SPDK bdev Controller", 00:18:08.520 "max_namespaces": 32, 00:18:08.520 "min_cntlid": 1, 00:18:08.520 "max_cntlid": 65519, 00:18:08.520 "namespaces": [ 00:18:08.520 { 00:18:08.520 "nsid": 1, 00:18:08.520 "bdev_name": "Malloc1", 00:18:08.520 "name": "Malloc1", 00:18:08.520 "nguid": "AA6902F3D148450F994FC3C19D8ACA1C", 00:18:08.520 "uuid": "aa6902f3-d148-450f-994f-c3c19d8aca1c" 00:18:08.520 } 00:18:08.520 ] 00:18:08.520 }, 00:18:08.520 { 00:18:08.520 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:08.520 "subtype": "NVMe", 00:18:08.520 "listen_addresses": [ 00:18:08.520 { 00:18:08.520 "trtype": "VFIOUSER", 00:18:08.520 "adrfam": "IPv4", 00:18:08.520 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:08.520 "trsvcid": "0" 00:18:08.520 } 00:18:08.520 ], 00:18:08.520 "allow_any_host": true, 00:18:08.520 "hosts": [], 00:18:08.520 "serial_number": "SPDK2", 00:18:08.520 "model_number": "SPDK bdev Controller", 00:18:08.520 "max_namespaces": 32, 00:18:08.520 "min_cntlid": 1, 00:18:08.520 "max_cntlid": 65519, 00:18:08.520 "namespaces": [ 00:18:08.520 { 00:18:08.520 "nsid": 1, 00:18:08.520 "bdev_name": "Malloc2", 00:18:08.520 "name": "Malloc2", 00:18:08.520 "nguid": "76EFBB01CD474DCE9074FE24CC6D7171", 00:18:08.520 "uuid": "76efbb01-cd47-4dce-9074-fe24cc6d7171" 00:18:08.520 } 00:18:08.520 ] 00:18:08.520 } 00:18:08.520 ] 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=410532 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:08.520 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:08.781 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.781 [2024-07-25 12:30:42.073363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.781 Malloc3 00:18:08.781 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:09.041 [2024-07-25 12:30:42.306081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.041 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:09.041 Asynchronous Event Request test 00:18:09.041 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.041 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.041 Registering asynchronous event callbacks... 00:18:09.041 Starting namespace attribute notice tests for all controllers... 00:18:09.041 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:09.041 aer_cb - Changed Namespace 00:18:09.042 Cleaning up... 
00:18:09.304 [ 00:18:09.304 { 00:18:09.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.304 "subtype": "Discovery", 00:18:09.304 "listen_addresses": [], 00:18:09.304 "allow_any_host": true, 00:18:09.304 "hosts": [] 00:18:09.304 }, 00:18:09.304 { 00:18:09.304 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.304 "subtype": "NVMe", 00:18:09.304 "listen_addresses": [ 00:18:09.304 { 00:18:09.304 "trtype": "VFIOUSER", 00:18:09.304 "adrfam": "IPv4", 00:18:09.304 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.304 "trsvcid": "0" 00:18:09.304 } 00:18:09.304 ], 00:18:09.304 "allow_any_host": true, 00:18:09.304 "hosts": [], 00:18:09.304 "serial_number": "SPDK1", 00:18:09.304 "model_number": "SPDK bdev Controller", 00:18:09.304 "max_namespaces": 32, 00:18:09.304 "min_cntlid": 1, 00:18:09.304 "max_cntlid": 65519, 00:18:09.304 "namespaces": [ 00:18:09.304 { 00:18:09.304 "nsid": 1, 00:18:09.304 "bdev_name": "Malloc1", 00:18:09.304 "name": "Malloc1", 00:18:09.304 "nguid": "AA6902F3D148450F994FC3C19D8ACA1C", 00:18:09.304 "uuid": "aa6902f3-d148-450f-994f-c3c19d8aca1c" 00:18:09.304 }, 00:18:09.304 { 00:18:09.304 "nsid": 2, 00:18:09.304 "bdev_name": "Malloc3", 00:18:09.304 "name": "Malloc3", 00:18:09.304 "nguid": "675AD86D10DE4793BFBD06B184341EF4", 00:18:09.304 "uuid": "675ad86d-10de-4793-bfbd-06b184341ef4" 00:18:09.304 } 00:18:09.304 ] 00:18:09.304 }, 00:18:09.304 { 00:18:09.304 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.304 "subtype": "NVMe", 00:18:09.304 "listen_addresses": [ 00:18:09.304 { 00:18:09.304 "trtype": "VFIOUSER", 00:18:09.304 "adrfam": "IPv4", 00:18:09.304 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.304 "trsvcid": "0" 00:18:09.304 } 00:18:09.304 ], 00:18:09.304 "allow_any_host": true, 00:18:09.304 "hosts": [], 00:18:09.304 "serial_number": "SPDK2", 00:18:09.304 "model_number": "SPDK bdev Controller", 00:18:09.304 "max_namespaces": 32, 00:18:09.304 "min_cntlid": 1, 00:18:09.304 "max_cntlid": 65519, 00:18:09.304 "namespaces": [ 00:18:09.304 { 00:18:09.304 "nsid": 1, 00:18:09.304 "bdev_name": "Malloc2", 00:18:09.304 "name": "Malloc2", 00:18:09.304 "nguid": "76EFBB01CD474DCE9074FE24CC6D7171", 00:18:09.304 "uuid": "76efbb01-cd47-4dce-9074-fe24cc6d7171" 00:18:09.304 } 00:18:09.304 ] 00:18:09.304 } 00:18:09.304 ] 00:18:09.304 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 410532 00:18:09.304 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:09.304 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:09.304 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:09.304 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:09.304 [2024-07-25 12:30:42.569332] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:18:09.304 [2024-07-25 12:30:42.569378] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410694 ] 00:18:09.304 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.304 [2024-07-25 12:30:42.601247] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:09.304 [2024-07-25 12:30:42.606751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.304 [2024-07-25 12:30:42.606772] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5ca590c000 00:18:09.304 [2024-07-25 12:30:42.607747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.608750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.609762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.610766] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.611775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.612775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.613787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.614796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.304 [2024-07-25 12:30:42.615814] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.304 [2024-07-25 12:30:42.615823] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5ca5901000 00:18:09.304 [2024-07-25 12:30:42.617049] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:09.304 [2024-07-25 12:30:42.636181] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:09.304 [2024-07-25 12:30:42.636216] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:09.304 [2024-07-25 12:30:42.638277] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:09.304 [2024-07-25 12:30:42.638323] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:09.304 [2024-07-25 12:30:42.638405] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:18:09.304 [2024-07-25 12:30:42.638419] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:09.304 [2024-07-25 12:30:42.638424] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:09.304 [2024-07-25 12:30:42.639276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:09.304 [2024-07-25 12:30:42.639288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:09.304 [2024-07-25 12:30:42.639295] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:09.304 [2024-07-25 12:30:42.640284] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:09.304 [2024-07-25 12:30:42.640299] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:09.304 [2024-07-25 12:30:42.640306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:09.304 [2024-07-25 12:30:42.641286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:09.304 [2024-07-25 12:30:42.641295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:09.304 [2024-07-25 12:30:42.642298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:09.304 [2024-07-25 12:30:42.642307] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:09.305 [2024-07-25 12:30:42.642312] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:09.305 [2024-07-25 12:30:42.642319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:09.305 [2024-07-25 12:30:42.642424] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:09.305 [2024-07-25 12:30:42.642429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:09.305 [2024-07-25 12:30:42.642434] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:09.305 [2024-07-25 12:30:42.643297] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:09.305 [2024-07-25 12:30:42.644310] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:09.305 [2024-07-25 12:30:42.645321] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:09.305 [2024-07-25 12:30:42.646320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:09.305 [2024-07-25 12:30:42.646358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:09.305 [2024-07-25 12:30:42.647332] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:09.305 [2024-07-25 12:30:42.647340] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:09.305 [2024-07-25 12:30:42.647345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.647364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:09.305 [2024-07-25 12:30:42.647371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.647383] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.305 [2024-07-25 12:30:42.647388] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.305 [2024-07-25 12:30:42.647391] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.305 [2024-07-25 12:30:42.647403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.305 [2024-07-25 12:30:42.653555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:09.305 [2024-07-25 12:30:42.653566] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:09.305 [2024-07-25 12:30:42.653571] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:09.305 [2024-07-25 12:30:42.653575] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:09.305 [2024-07-25 12:30:42.653580] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:09.305 [2024-07-25 12:30:42.653584] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:09.305 [2024-07-25 12:30:42.653588] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:09.305 [2024-07-25 12:30:42.653593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.653600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.653611] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:09.305 [2024-07-25 12:30:42.661554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:09.305 [2024-07-25 12:30:42.661568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.305 [2024-07-25 12:30:42.661576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.305 [2024-07-25 12:30:42.661584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.305 [2024-07-25 12:30:42.661591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.305 [2024-07-25 12:30:42.661596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.661604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.661612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:09.305 [2024-07-25 12:30:42.669553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:09.305 [2024-07-25 12:30:42.669560] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:09.305 [2024-07-25 12:30:42.669565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.669574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.669579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.669588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.305 [2024-07-25 12:30:42.677553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:09.305 [2024-07-25 12:30:42.677616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.677624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.677631] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:09.305 [2024-07-25 12:30:42.677635] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:09.305 [2024-07-25 
12:30:42.677639] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.305 [2024-07-25 12:30:42.677645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:09.305 [2024-07-25 12:30:42.685553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:09.305 [2024-07-25 12:30:42.685564] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:09.305 [2024-07-25 12:30:42.685576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.685583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.685590] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.305 [2024-07-25 12:30:42.685593] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.305 [2024-07-25 12:30:42.685596] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.305 [2024-07-25 12:30:42.685602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.305 [2024-07-25 12:30:42.693553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:09.305 [2024-07-25 12:30:42.693565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.693573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.693580] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.305 [2024-07-25 12:30:42.693584] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.305 [2024-07-25 12:30:42.693587] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.305 [2024-07-25 12:30:42.693592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.305 [2024-07-25 12:30:42.701556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:09.305 [2024-07-25 12:30:42.701564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.701571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.701578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:09.305 [2024-07-25 
12:30:42.701587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.701594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:09.305 [2024-07-25 12:30:42.701599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:09.306 [2024-07-25 12:30:42.701604] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:09.306 [2024-07-25 12:30:42.701608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:09.306 [2024-07-25 12:30:42.701613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:09.306 [2024-07-25 12:30:42.701627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:09.306 [2024-07-25 12:30:42.709551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:09.306 [2024-07-25 12:30:42.709564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:09.306 [2024-07-25 12:30:42.717553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:09.306 [2024-07-25 12:30:42.717565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:09.567 [2024-07-25 12:30:42.725553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:09.567 [2024-07-25 12:30:42.725566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.567 [2024-07-25 12:30:42.731552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:09.567 [2024-07-25 12:30:42.731569] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:09.567 [2024-07-25 12:30:42.731573] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:09.567 [2024-07-25 12:30:42.731576] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:09.567 [2024-07-25 12:30:42.731580] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:09.567 [2024-07-25 12:30:42.731583] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:09.567 [2024-07-25 12:30:42.731588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:09.567 [2024-07-25 12:30:42.731596] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:09.567 [2024-07-25 12:30:42.731600] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:18:09.567 [2024-07-25 12:30:42.731603] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.567 [2024-07-25 12:30:42.731608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:09.567 [2024-07-25 12:30:42.731615] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:09.567 [2024-07-25 12:30:42.731619] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.567 [2024-07-25 12:30:42.731622] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.567 [2024-07-25 12:30:42.731627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.567 [2024-07-25 12:30:42.731635] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:09.567 [2024-07-25 12:30:42.731641] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:09.567 [2024-07-25 12:30:42.731644] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.567 [2024-07-25 12:30:42.731649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:09.567 [2024-07-25 12:30:42.741554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:09.567 [2024-07-25 12:30:42.741567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:09.567 [2024-07-25 12:30:42.741577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:09.567 [2024-07-25 12:30:42.741583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:09.567 ===================================================== 00:18:09.567 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:09.567 ===================================================== 00:18:09.567 Controller Capabilities/Features 00:18:09.567 ================================ 00:18:09.567 Vendor ID: 4e58 00:18:09.567 Subsystem Vendor ID: 4e58 00:18:09.567 Serial Number: SPDK2 00:18:09.567 Model Number: SPDK bdev Controller 00:18:09.567 Firmware Version: 24.09 00:18:09.567 Recommended Arb Burst: 6 00:18:09.567 IEEE OUI Identifier: 8d 6b 50 00:18:09.567 Multi-path I/O 00:18:09.567 May have multiple subsystem ports: Yes 00:18:09.567 May have multiple controllers: Yes 00:18:09.567 Associated with SR-IOV VF: No 00:18:09.567 Max Data Transfer Size: 131072 00:18:09.567 Max Number of Namespaces: 32 00:18:09.567 Max Number of I/O Queues: 127 00:18:09.567 NVMe Specification Version (VS): 1.3 00:18:09.567 NVMe Specification Version (Identify): 1.3 00:18:09.567 Maximum Queue Entries: 256 00:18:09.567 Contiguous Queues Required: Yes 00:18:09.567 Arbitration Mechanisms Supported 00:18:09.567 Weighted Round Robin: Not Supported 00:18:09.567 Vendor Specific: Not Supported 00:18:09.567 Reset Timeout: 15000 ms 00:18:09.567 Doorbell Stride: 4 
bytes 00:18:09.567 NVM Subsystem Reset: Not Supported 00:18:09.567 Command Sets Supported 00:18:09.567 NVM Command Set: Supported 00:18:09.567 Boot Partition: Not Supported 00:18:09.567 Memory Page Size Minimum: 4096 bytes 00:18:09.567 Memory Page Size Maximum: 4096 bytes 00:18:09.567 Persistent Memory Region: Not Supported 00:18:09.567 Optional Asynchronous Events Supported 00:18:09.567 Namespace Attribute Notices: Supported 00:18:09.567 Firmware Activation Notices: Not Supported 00:18:09.567 ANA Change Notices: Not Supported 00:18:09.567 PLE Aggregate Log Change Notices: Not Supported 00:18:09.567 LBA Status Info Alert Notices: Not Supported 00:18:09.567 EGE Aggregate Log Change Notices: Not Supported 00:18:09.567 Normal NVM Subsystem Shutdown event: Not Supported 00:18:09.567 Zone Descriptor Change Notices: Not Supported 00:18:09.567 Discovery Log Change Notices: Not Supported 00:18:09.567 Controller Attributes 00:18:09.567 128-bit Host Identifier: Supported 00:18:09.567 Non-Operational Permissive Mode: Not Supported 00:18:09.567 NVM Sets: Not Supported 00:18:09.567 Read Recovery Levels: Not Supported 00:18:09.567 Endurance Groups: Not Supported 00:18:09.567 Predictable Latency Mode: Not Supported 00:18:09.567 Traffic Based Keep ALive: Not Supported 00:18:09.567 Namespace Granularity: Not Supported 00:18:09.567 SQ Associations: Not Supported 00:18:09.567 UUID List: Not Supported 00:18:09.567 Multi-Domain Subsystem: Not Supported 00:18:09.568 Fixed Capacity Management: Not Supported 00:18:09.568 Variable Capacity Management: Not Supported 00:18:09.568 Delete Endurance Group: Not Supported 00:18:09.568 Delete NVM Set: Not Supported 00:18:09.568 Extended LBA Formats Supported: Not Supported 00:18:09.568 Flexible Data Placement Supported: Not Supported 00:18:09.568 00:18:09.568 Controller Memory Buffer Support 00:18:09.568 ================================ 00:18:09.568 Supported: No 00:18:09.568 00:18:09.568 Persistent Memory Region Support 00:18:09.568 ================================ 00:18:09.568 Supported: No 00:18:09.568 00:18:09.568 Admin Command Set Attributes 00:18:09.568 ============================ 00:18:09.568 Security Send/Receive: Not Supported 00:18:09.568 Format NVM: Not Supported 00:18:09.568 Firmware Activate/Download: Not Supported 00:18:09.568 Namespace Management: Not Supported 00:18:09.568 Device Self-Test: Not Supported 00:18:09.568 Directives: Not Supported 00:18:09.568 NVMe-MI: Not Supported 00:18:09.568 Virtualization Management: Not Supported 00:18:09.568 Doorbell Buffer Config: Not Supported 00:18:09.568 Get LBA Status Capability: Not Supported 00:18:09.568 Command & Feature Lockdown Capability: Not Supported 00:18:09.568 Abort Command Limit: 4 00:18:09.568 Async Event Request Limit: 4 00:18:09.568 Number of Firmware Slots: N/A 00:18:09.568 Firmware Slot 1 Read-Only: N/A 00:18:09.568 Firmware Activation Without Reset: N/A 00:18:09.568 Multiple Update Detection Support: N/A 00:18:09.568 Firmware Update Granularity: No Information Provided 00:18:09.568 Per-Namespace SMART Log: No 00:18:09.568 Asymmetric Namespace Access Log Page: Not Supported 00:18:09.568 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:09.568 Command Effects Log Page: Supported 00:18:09.568 Get Log Page Extended Data: Supported 00:18:09.568 Telemetry Log Pages: Not Supported 00:18:09.568 Persistent Event Log Pages: Not Supported 00:18:09.568 Supported Log Pages Log Page: May Support 00:18:09.568 Commands Supported & Effects Log Page: Not Supported 00:18:09.568 Feature Identifiers & Effects Log 
Page:May Support 00:18:09.568 NVMe-MI Commands & Effects Log Page: May Support 00:18:09.568 Data Area 4 for Telemetry Log: Not Supported 00:18:09.568 Error Log Page Entries Supported: 128 00:18:09.568 Keep Alive: Supported 00:18:09.568 Keep Alive Granularity: 10000 ms 00:18:09.568 00:18:09.568 NVM Command Set Attributes 00:18:09.568 ========================== 00:18:09.568 Submission Queue Entry Size 00:18:09.568 Max: 64 00:18:09.568 Min: 64 00:18:09.568 Completion Queue Entry Size 00:18:09.568 Max: 16 00:18:09.568 Min: 16 00:18:09.568 Number of Namespaces: 32 00:18:09.568 Compare Command: Supported 00:18:09.568 Write Uncorrectable Command: Not Supported 00:18:09.568 Dataset Management Command: Supported 00:18:09.568 Write Zeroes Command: Supported 00:18:09.568 Set Features Save Field: Not Supported 00:18:09.568 Reservations: Not Supported 00:18:09.568 Timestamp: Not Supported 00:18:09.568 Copy: Supported 00:18:09.568 Volatile Write Cache: Present 00:18:09.568 Atomic Write Unit (Normal): 1 00:18:09.568 Atomic Write Unit (PFail): 1 00:18:09.568 Atomic Compare & Write Unit: 1 00:18:09.568 Fused Compare & Write: Supported 00:18:09.568 Scatter-Gather List 00:18:09.568 SGL Command Set: Supported (Dword aligned) 00:18:09.568 SGL Keyed: Not Supported 00:18:09.568 SGL Bit Bucket Descriptor: Not Supported 00:18:09.568 SGL Metadata Pointer: Not Supported 00:18:09.568 Oversized SGL: Not Supported 00:18:09.568 SGL Metadata Address: Not Supported 00:18:09.568 SGL Offset: Not Supported 00:18:09.568 Transport SGL Data Block: Not Supported 00:18:09.568 Replay Protected Memory Block: Not Supported 00:18:09.568 00:18:09.568 Firmware Slot Information 00:18:09.568 ========================= 00:18:09.568 Active slot: 1 00:18:09.568 Slot 1 Firmware Revision: 24.09 00:18:09.568 00:18:09.568 00:18:09.568 Commands Supported and Effects 00:18:09.568 ============================== 00:18:09.568 Admin Commands 00:18:09.568 -------------- 00:18:09.568 Get Log Page (02h): Supported 00:18:09.568 Identify (06h): Supported 00:18:09.568 Abort (08h): Supported 00:18:09.568 Set Features (09h): Supported 00:18:09.568 Get Features (0Ah): Supported 00:18:09.568 Asynchronous Event Request (0Ch): Supported 00:18:09.568 Keep Alive (18h): Supported 00:18:09.568 I/O Commands 00:18:09.568 ------------ 00:18:09.568 Flush (00h): Supported LBA-Change 00:18:09.568 Write (01h): Supported LBA-Change 00:18:09.568 Read (02h): Supported 00:18:09.568 Compare (05h): Supported 00:18:09.568 Write Zeroes (08h): Supported LBA-Change 00:18:09.568 Dataset Management (09h): Supported LBA-Change 00:18:09.568 Copy (19h): Supported LBA-Change 00:18:09.568 00:18:09.568 Error Log 00:18:09.568 ========= 00:18:09.568 00:18:09.568 Arbitration 00:18:09.568 =========== 00:18:09.568 Arbitration Burst: 1 00:18:09.568 00:18:09.568 Power Management 00:18:09.568 ================ 00:18:09.568 Number of Power States: 1 00:18:09.568 Current Power State: Power State #0 00:18:09.568 Power State #0: 00:18:09.568 Max Power: 0.00 W 00:18:09.568 Non-Operational State: Operational 00:18:09.568 Entry Latency: Not Reported 00:18:09.568 Exit Latency: Not Reported 00:18:09.568 Relative Read Throughput: 0 00:18:09.568 Relative Read Latency: 0 00:18:09.568 Relative Write Throughput: 0 00:18:09.568 Relative Write Latency: 0 00:18:09.568 Idle Power: Not Reported 00:18:09.568 Active Power: Not Reported 00:18:09.568 Non-Operational Permissive Mode: Not Supported 00:18:09.568 00:18:09.568 Health Information 00:18:09.568 ================== 00:18:09.568 Critical Warnings: 00:18:09.568 
Available Spare Space: OK 00:18:09.568 Temperature: OK 00:18:09.568 Device Reliability: OK 00:18:09.568 Read Only: No 00:18:09.568 Volatile Memory Backup: OK 00:18:09.568 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:09.568 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:09.568 Available Spare: 0% 00:18:09.568 Available Sp[2024-07-25 12:30:42.741676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:09.568 [2024-07-25 12:30:42.749552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:09.568 [2024-07-25 12:30:42.749580] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:09.568 [2024-07-25 12:30:42.749589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.568 [2024-07-25 12:30:42.749595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.568 [2024-07-25 12:30:42.749601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.568 [2024-07-25 12:30:42.749607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.568 [2024-07-25 12:30:42.751588] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:09.568 [2024-07-25 12:30:42.751598] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:09.568 [2024-07-25 12:30:42.751670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:09.568 [2024-07-25 12:30:42.751715] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:09.568 [2024-07-25 12:30:42.751721] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:09.568 [2024-07-25 12:30:42.752672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:09.568 [2024-07-25 12:30:42.752683] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:09.568 [2024-07-25 12:30:42.752734] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:09.568 [2024-07-25 12:30:42.754018] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:09.568 are Threshold: 0% 00:18:09.568 Life Percentage Used: 0% 00:18:09.568 Data Units Read: 0 00:18:09.568 Data Units Written: 0 00:18:09.568 Host Read Commands: 0 00:18:09.568 Host Write Commands: 0 00:18:09.568 Controller Busy Time: 0 minutes 00:18:09.568 Power Cycles: 0 00:18:09.568 Power On Hours: 0 hours 00:18:09.568 Unsafe Shutdowns: 0 00:18:09.568 Unrecoverable Media Errors: 0 00:18:09.568 Lifetime Error Log Entries: 0 00:18:09.568 Warning Temperature Time: 0 minutes 00:18:09.568 Critical Temperature Time: 0 minutes 00:18:09.568 
00:18:09.568 Number of Queues 00:18:09.568 ================ 00:18:09.568 Number of I/O Submission Queues: 127 00:18:09.568 Number of I/O Completion Queues: 127 00:18:09.568 00:18:09.568 Active Namespaces 00:18:09.568 ================= 00:18:09.568 Namespace ID:1 00:18:09.568 Error Recovery Timeout: Unlimited 00:18:09.568 Command Set Identifier: NVM (00h) 00:18:09.568 Deallocate: Supported 00:18:09.568 Deallocated/Unwritten Error: Not Supported 00:18:09.568 Deallocated Read Value: Unknown 00:18:09.568 Deallocate in Write Zeroes: Not Supported 00:18:09.568 Deallocated Guard Field: 0xFFFF 00:18:09.568 Flush: Supported 00:18:09.568 Reservation: Supported 00:18:09.569 Namespace Sharing Capabilities: Multiple Controllers 00:18:09.569 Size (in LBAs): 131072 (0GiB) 00:18:09.569 Capacity (in LBAs): 131072 (0GiB) 00:18:09.569 Utilization (in LBAs): 131072 (0GiB) 00:18:09.569 NGUID: 76EFBB01CD474DCE9074FE24CC6D7171 00:18:09.569 UUID: 76efbb01-cd47-4dce-9074-fe24cc6d7171 00:18:09.569 Thin Provisioning: Not Supported 00:18:09.569 Per-NS Atomic Units: Yes 00:18:09.569 Atomic Boundary Size (Normal): 0 00:18:09.569 Atomic Boundary Size (PFail): 0 00:18:09.569 Atomic Boundary Offset: 0 00:18:09.569 Maximum Single Source Range Length: 65535 00:18:09.569 Maximum Copy Length: 65535 00:18:09.569 Maximum Source Range Count: 1 00:18:09.569 NGUID/EUI64 Never Reused: No 00:18:09.569 Namespace Write Protected: No 00:18:09.569 Number of LBA Formats: 1 00:18:09.569 Current LBA Format: LBA Format #00 00:18:09.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:09.569 00:18:09.569 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:09.569 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.569 [2024-07-25 12:30:42.966595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:14.855 Initializing NVMe Controllers 00:18:14.855 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:14.855 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:14.855 Initialization complete. Launching workers. 
00:18:14.855 ======================================================== 00:18:14.855 Latency(us) 00:18:14.855 Device Information : IOPS MiB/s Average min max 00:18:14.855 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 16182.44 63.21 7918.53 3014.87 17997.08 00:18:14.855 ======================================================== 00:18:14.855 Total : 16182.44 63.21 7918.53 3014.87 17997.08 00:18:14.855 00:18:14.855 [2024-07-25 12:30:48.079794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:14.855 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:14.855 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.115 [2024-07-25 12:30:48.326935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:20.396 Initializing NVMe Controllers 00:18:20.396 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:20.396 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:20.396 Initialization complete. Launching workers. 00:18:20.396 ======================================================== 00:18:20.396 Latency(us) 00:18:20.396 Device Information : IOPS MiB/s Average min max 00:18:20.396 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34558.00 134.99 3705.80 1297.46 8467.11 00:18:20.396 ======================================================== 00:18:20.396 Total : 34558.00 134.99 3705.80 1297.46 8467.11 00:18:20.396 00:18:20.396 [2024-07-25 12:30:53.347706] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:20.396 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:20.396 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.396 [2024-07-25 12:30:53.628616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:25.761 [2024-07-25 12:30:58.784663] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:25.761 Initializing NVMe Controllers 00:18:25.761 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:25.761 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:25.761 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:25.761 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:25.761 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:25.761 Initialization complete. Launching workers. 
00:18:25.761 Starting thread on core 2 00:18:25.761 Starting thread on core 3 00:18:25.761 Starting thread on core 1 00:18:25.761 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:25.761 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.761 [2024-07-25 12:30:59.089124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.054 [2024-07-25 12:31:02.155614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.054 Initializing NVMe Controllers 00:18:29.054 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.054 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.054 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:29.054 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:29.054 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:29.054 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:29.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:29.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:29.054 Initialization complete. Launching workers. 00:18:29.054 Starting thread on core 1 with urgent priority queue 00:18:29.054 Starting thread on core 2 with urgent priority queue 00:18:29.054 Starting thread on core 3 with urgent priority queue 00:18:29.054 Starting thread on core 0 with urgent priority queue 00:18:29.054 SPDK bdev Controller (SPDK2 ) core 0: 6945.67 IO/s 14.40 secs/100000 ios 00:18:29.054 SPDK bdev Controller (SPDK2 ) core 1: 4320.67 IO/s 23.14 secs/100000 ios 00:18:29.054 SPDK bdev Controller (SPDK2 ) core 2: 3908.33 IO/s 25.59 secs/100000 ios 00:18:29.054 SPDK bdev Controller (SPDK2 ) core 3: 8345.33 IO/s 11.98 secs/100000 ios 00:18:29.054 ======================================================== 00:18:29.054 00:18:29.054 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:29.054 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.054 [2024-07-25 12:31:02.422457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.054 Initializing NVMe Controllers 00:18:29.054 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.054 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.054 Namespace ID: 1 size: 0GB 00:18:29.054 Initialization complete. 00:18:29.054 INFO: using host memory buffer for IO 00:18:29.054 Hello world! 
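A quick consistency check on the figures reported above: in the two spdk_nvme_perf runs the MiB/s column is just the IOPS figure multiplied by the 4096-byte I/O size (-o 4096), and in the arbitration run the "secs/100000 ios" column is 100000 divided by the per-core IO/s. A minimal sketch of that arithmetic, assuming bc and awk are available on the test node (this is an illustration, not part of the test scripts):

# spdk_nvme_perf: IOPS * 4096 B / 2^20 -> MiB/s
echo '16182.44 * 4096 / 1048576' | bc -l   # read run,  ~63.21 MiB/s
echo '34558.00 * 4096 / 1048576' | bc -l   # write run, ~135.0 MiB/s (134.99 reported)
# arbitration: 100000 / IO-per-second -> secs/100000 ios
awk 'BEGIN { print 100000/6945.67, 100000/4320.67, 100000/3908.33, 100000/8345.33 }'   # ~14.40 23.14 25.59 11.98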
00:18:29.054 [2024-07-25 12:31:02.431107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.315 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:29.315 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.315 [2024-07-25 12:31:02.692679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.695 Initializing NVMe Controllers 00:18:30.695 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.695 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.695 Initialization complete. Launching workers. 00:18:30.695 submit (in ns) avg, min, max = 8800.1, 3646.2, 3999886.2 00:18:30.695 complete (in ns) avg, min, max = 30534.2, 2193.8, 3999926.9 00:18:30.696 00:18:30.696 Submit histogram 00:18:30.696 ================ 00:18:30.696 Range in us Cumulative Count 00:18:30.696 3.643 - 3.668: 1.7720% ( 198) 00:18:30.696 3.668 - 3.692: 7.9739% ( 693) 00:18:30.696 3.692 - 3.717: 16.7174% ( 977) 00:18:30.696 3.717 - 3.742: 26.5975% ( 1104) 00:18:30.696 3.742 - 3.766: 36.4775% ( 1104) 00:18:30.696 3.766 - 3.791: 48.1833% ( 1308) 00:18:30.696 3.791 - 3.815: 63.6030% ( 1723) 00:18:30.696 3.815 - 3.840: 78.5932% ( 1675) 00:18:30.696 3.840 - 3.865: 90.3258% ( 1311) 00:18:30.696 3.865 - 3.889: 95.8565% ( 618) 00:18:30.696 3.889 - 3.914: 98.3086% ( 274) 00:18:30.696 3.914 - 3.938: 99.2572% ( 106) 00:18:30.696 3.938 - 3.963: 99.4899% ( 26) 00:18:30.696 3.963 - 3.988: 99.6152% ( 14) 00:18:30.696 3.988 - 4.012: 99.6241% ( 1) 00:18:30.696 4.086 - 4.111: 99.6331% ( 1) 00:18:30.696 4.234 - 4.258: 99.6420% ( 1) 00:18:30.696 4.652 - 4.677: 99.6510% ( 1) 00:18:30.696 5.711 - 5.735: 99.6599% ( 1) 00:18:30.696 5.957 - 5.982: 99.6778% ( 2) 00:18:30.696 6.080 - 6.105: 99.6868% ( 1) 00:18:30.696 6.203 - 6.228: 99.6957% ( 1) 00:18:30.696 6.277 - 6.302: 99.7047% ( 1) 00:18:30.696 6.548 - 6.597: 99.7136% ( 1) 00:18:30.696 6.646 - 6.695: 99.7226% ( 1) 00:18:30.696 6.843 - 6.892: 99.7405% ( 2) 00:18:30.696 6.892 - 6.942: 99.7494% ( 1) 00:18:30.696 6.942 - 6.991: 99.7584% ( 1) 00:18:30.696 7.089 - 7.138: 99.7673% ( 1) 00:18:30.696 7.237 - 7.286: 99.7763% ( 1) 00:18:30.696 7.532 - 7.582: 99.7852% ( 1) 00:18:30.696 7.778 - 7.828: 99.7942% ( 1) 00:18:30.696 7.877 - 7.926: 99.8121% ( 2) 00:18:30.696 7.975 - 8.025: 99.8300% ( 2) 00:18:30.696 8.566 - 8.615: 99.8479% ( 2) 00:18:30.696 8.714 - 8.763: 99.8568% ( 1) 00:18:30.696 10.683 - 10.732: 99.8658% ( 1) 00:18:30.696 11.323 - 11.372: 99.8747% ( 1) 00:18:30.696 3982.572 - 4007.778: 100.0000% ( 14) 00:18:30.696 00:18:30.696 Complete histogram 00:18:30.696 ================== 00:18:30.696 Range in us Cumulative Count 00:18:30.696 2.191 - 2.203: 0.1432% ( 16) 00:18:30.696 2.203 - 2.215: 1.9957% ( 207) 00:18:30.696 2.215 - 2.228: 2.1568% ( 18) 00:18:30.696 2.228 - 2.240: 2.5237% ( 41) 00:18:30.696 2.240 - 2.252: 2.7385% ( 24) 00:18:30.696 2.252 - 2.265: 22.1138% ( 2165) 00:18:30.696 2.265 - 2.277: 54.9311% ( 3667) 00:18:30.696 2.277 - 2.289: 60.9451% ( 672) 00:18:30.696 2.289 - 2.302: 78.1546% ( 1923) 00:18:30.696 2.302 - 2.314: 84.5534% ( 715) 00:18:30.696 2.314 - 2.326: 85.5289% ( 109) 00:18:30.696 2.326 - 2.338: 86.4417% ( 102) 00:18:30.696 2.338 - 2.351: 90.0394% ( 402) 00:18:30.696 2.351 - 2.363: 94.2008% ( 465) 
00:18:30.696 2.363 - 2.375: 96.4113% ( 247) 00:18:30.696 2.375 - 2.388: 98.1027% ( 189) 00:18:30.696 2.388 - 2.400: 98.8276% ( 81) 00:18:30.696 2.400 - 2.412: 99.0245% ( 22) 00:18:30.696 2.412 - 2.425: 99.1051% ( 9) 00:18:30.696 2.425 - 2.437: 99.1140% ( 1) 00:18:30.696 2.449 - 2.462: 99.1230% ( 1) 00:18:30.696 2.523 - 2.535: 99.1319% ( 1) 00:18:30.696 3.126 - 3.138: 99.1409% ( 1) 00:18:30.696 4.332 - 4.357: 99.1498% ( 1) 00:18:30.696 4.455 - 4.480: 99.1588% ( 1) 00:18:30.696 4.554 - 4.578: 99.1677% ( 1) 00:18:30.696 4.603 - 4.628: 99.1767% ( 1) 00:18:30.696 4.775 - 4.800: 99.1856% ( 1) 00:18:30.696 5.169 - 5.194: 99.1946% ( 1) 00:18:30.696 5.415 - 5.440: 99.2035% ( 1) 00:18:30.696 5.489 - 5.514: 99.2125% ( 1) 00:18:30.696 5.563 - 5.588: 99.2214% ( 1) 00:18:30.696 5.711 - 5.735: 99.2304% ( 1) 00:18:30.696 5.735 - 5.760: 99.2393% ( 1) 00:18:30.696 5.760 - 5.785: 99.2483% ( 1) 00:18:30.696 6.080 - 6.105: 99.2572% ( 1) 00:18:30.696 6.277 - 6.302: 99.2662% ( 1) 00:18:30.696 6.400 - 6.449: 99.2751% ( 1) 00:18:30.696 6.892 - [2024-07-25 12:31:03.788880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.696 6.942: 99.2841% ( 1) 00:18:30.696 12.997 - 13.095: 99.2930% ( 1) 00:18:30.696 3982.572 - 4007.778: 100.0000% ( 79) 00:18:30.696 00:18:30.696 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:30.696 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:30.696 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:30.696 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:30.696 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:30.696 [ 00:18:30.696 { 00:18:30.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:30.696 "subtype": "Discovery", 00:18:30.696 "listen_addresses": [], 00:18:30.696 "allow_any_host": true, 00:18:30.696 "hosts": [] 00:18:30.696 }, 00:18:30.696 { 00:18:30.696 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:30.696 "subtype": "NVMe", 00:18:30.696 "listen_addresses": [ 00:18:30.696 { 00:18:30.696 "trtype": "VFIOUSER", 00:18:30.696 "adrfam": "IPv4", 00:18:30.696 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:30.696 "trsvcid": "0" 00:18:30.696 } 00:18:30.696 ], 00:18:30.696 "allow_any_host": true, 00:18:30.696 "hosts": [], 00:18:30.696 "serial_number": "SPDK1", 00:18:30.696 "model_number": "SPDK bdev Controller", 00:18:30.696 "max_namespaces": 32, 00:18:30.696 "min_cntlid": 1, 00:18:30.696 "max_cntlid": 65519, 00:18:30.696 "namespaces": [ 00:18:30.696 { 00:18:30.696 "nsid": 1, 00:18:30.696 "bdev_name": "Malloc1", 00:18:30.696 "name": "Malloc1", 00:18:30.696 "nguid": "AA6902F3D148450F994FC3C19D8ACA1C", 00:18:30.696 "uuid": "aa6902f3-d148-450f-994f-c3c19d8aca1c" 00:18:30.696 }, 00:18:30.696 { 00:18:30.696 "nsid": 2, 00:18:30.696 "bdev_name": "Malloc3", 00:18:30.696 "name": "Malloc3", 00:18:30.696 "nguid": "675AD86D10DE4793BFBD06B184341EF4", 00:18:30.696 "uuid": "675ad86d-10de-4793-bfbd-06b184341ef4" 00:18:30.696 } 00:18:30.696 ] 00:18:30.696 }, 00:18:30.696 { 00:18:30.696 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:30.696 
"subtype": "NVMe", 00:18:30.696 "listen_addresses": [ 00:18:30.696 { 00:18:30.696 "trtype": "VFIOUSER", 00:18:30.696 "adrfam": "IPv4", 00:18:30.696 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:30.696 "trsvcid": "0" 00:18:30.697 } 00:18:30.697 ], 00:18:30.697 "allow_any_host": true, 00:18:30.697 "hosts": [], 00:18:30.697 "serial_number": "SPDK2", 00:18:30.697 "model_number": "SPDK bdev Controller", 00:18:30.697 "max_namespaces": 32, 00:18:30.697 "min_cntlid": 1, 00:18:30.697 "max_cntlid": 65519, 00:18:30.697 "namespaces": [ 00:18:30.697 { 00:18:30.697 "nsid": 1, 00:18:30.697 "bdev_name": "Malloc2", 00:18:30.697 "name": "Malloc2", 00:18:30.697 "nguid": "76EFBB01CD474DCE9074FE24CC6D7171", 00:18:30.697 "uuid": "76efbb01-cd47-4dce-9074-fe24cc6d7171" 00:18:30.697 } 00:18:30.697 ] 00:18:30.697 } 00:18:30.697 ] 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=414149 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:30.697 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:30.697 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.957 [2024-07-25 12:31:04.229378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.957 Malloc4 00:18:30.957 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:31.217 [2024-07-25 12:31:04.438736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.217 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:31.217 Asynchronous Event Request test 00:18:31.217 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.217 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.217 Registering asynchronous event callbacks... 00:18:31.217 Starting namespace attribute notice tests for all controllers... 
00:18:31.217 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:31.217 aer_cb - Changed Namespace 00:18:31.217 Cleaning up... 00:18:31.477 [ 00:18:31.477 { 00:18:31.477 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.477 "subtype": "Discovery", 00:18:31.477 "listen_addresses": [], 00:18:31.477 "allow_any_host": true, 00:18:31.477 "hosts": [] 00:18:31.477 }, 00:18:31.477 { 00:18:31.477 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:31.477 "subtype": "NVMe", 00:18:31.477 "listen_addresses": [ 00:18:31.477 { 00:18:31.477 "trtype": "VFIOUSER", 00:18:31.477 "adrfam": "IPv4", 00:18:31.477 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:31.477 "trsvcid": "0" 00:18:31.477 } 00:18:31.477 ], 00:18:31.477 "allow_any_host": true, 00:18:31.477 "hosts": [], 00:18:31.477 "serial_number": "SPDK1", 00:18:31.477 "model_number": "SPDK bdev Controller", 00:18:31.477 "max_namespaces": 32, 00:18:31.477 "min_cntlid": 1, 00:18:31.477 "max_cntlid": 65519, 00:18:31.477 "namespaces": [ 00:18:31.477 { 00:18:31.477 "nsid": 1, 00:18:31.477 "bdev_name": "Malloc1", 00:18:31.477 "name": "Malloc1", 00:18:31.477 "nguid": "AA6902F3D148450F994FC3C19D8ACA1C", 00:18:31.477 "uuid": "aa6902f3-d148-450f-994f-c3c19d8aca1c" 00:18:31.477 }, 00:18:31.477 { 00:18:31.477 "nsid": 2, 00:18:31.477 "bdev_name": "Malloc3", 00:18:31.477 "name": "Malloc3", 00:18:31.477 "nguid": "675AD86D10DE4793BFBD06B184341EF4", 00:18:31.477 "uuid": "675ad86d-10de-4793-bfbd-06b184341ef4" 00:18:31.477 } 00:18:31.477 ] 00:18:31.477 }, 00:18:31.477 { 00:18:31.477 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:31.477 "subtype": "NVMe", 00:18:31.477 "listen_addresses": [ 00:18:31.477 { 00:18:31.477 "trtype": "VFIOUSER", 00:18:31.477 "adrfam": "IPv4", 00:18:31.477 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:31.477 "trsvcid": "0" 00:18:31.477 } 00:18:31.477 ], 00:18:31.477 "allow_any_host": true, 00:18:31.477 "hosts": [], 00:18:31.477 "serial_number": "SPDK2", 00:18:31.477 "model_number": "SPDK bdev Controller", 00:18:31.477 "max_namespaces": 32, 00:18:31.477 "min_cntlid": 1, 00:18:31.477 "max_cntlid": 65519, 00:18:31.477 "namespaces": [ 00:18:31.477 { 00:18:31.477 "nsid": 1, 00:18:31.477 "bdev_name": "Malloc2", 00:18:31.477 "name": "Malloc2", 00:18:31.477 "nguid": "76EFBB01CD474DCE9074FE24CC6D7171", 00:18:31.477 "uuid": "76efbb01-cd47-4dce-9074-fe24cc6d7171" 00:18:31.477 }, 00:18:31.477 { 00:18:31.477 "nsid": 2, 00:18:31.478 "bdev_name": "Malloc4", 00:18:31.478 "name": "Malloc4", 00:18:31.478 "nguid": "634C5D29FB344E72A32AD3311E401360", 00:18:31.478 "uuid": "634c5d29-fb34-4e72-a32a-d3311e401360" 00:18:31.478 } 00:18:31.478 ] 00:18:31.478 } 00:18:31.478 ] 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 414149 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 406430 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 406430 ']' 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 406430 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.478 12:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406430 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406430' 00:18:31.478 killing process with pid 406430 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 406430 00:18:31.478 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 406430 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=414368 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 414368' 00:18:31.739 Process pid: 414368 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 414368 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 414368 ']' 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.739 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:31.739 [2024-07-25 12:31:04.984871] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:31.739 [2024-07-25 12:31:04.985771] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:18:31.739 [2024-07-25 12:31:04.985813] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.739 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.739 [2024-07-25 12:31:05.066738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.739 [2024-07-25 12:31:05.133422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.739 [2024-07-25 12:31:05.133460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.739 [2024-07-25 12:31:05.133471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.739 [2024-07-25 12:31:05.133476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.739 [2024-07-25 12:31:05.133482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.739 [2024-07-25 12:31:05.133597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.739 [2024-07-25 12:31:05.133671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.739 [2024-07-25 12:31:05.133824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.739 [2024-07-25 12:31:05.133825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.000 [2024-07-25 12:31:05.193852] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:32.000 [2024-07-25 12:31:05.194251] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:32.000 [2024-07-25 12:31:05.195118] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:32.000 [2024-07-25 12:31:05.195165] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:32.000 [2024-07-25 12:31:05.195369] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
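The interrupt-mode target that has just started is configured over the next few steps with the same per-device RPC sequence used for the earlier polled-mode run: create the VFIOUSER transport, then for each device create a malloc bdev, a subsystem, a namespace, and a vfio-user listener. Condensed into a sketch using the paths and arguments visible in the traced commands below (the loop itself is illustrative, not the actual nvmf_vfio_user.sh script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER -M -I    # -M -I is passed only for this interrupt-mode pass
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $rpc bdev_malloc_create 64 512 -b Malloc$i
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done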
00:18:32.571 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.571 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:18:32.571 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:33.512 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:33.772 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:33.772 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:33.772 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:33.772 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:33.772 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:34.033 Malloc1 00:18:34.033 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:34.293 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:34.293 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:34.554 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:34.554 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:34.554 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:34.815 Malloc2 00:18:34.815 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:35.075 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 414368 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@948 -- # '[' -z 414368 ']' 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 414368 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.336 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 414368 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 414368' 00:18:35.597 killing process with pid 414368 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 414368 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 414368 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:35.597 00:18:35.597 real 0m51.616s 00:18:35.597 user 3m24.530s 00:18:35.597 sys 0m3.456s 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:35.597 ************************************ 00:18:35.597 END TEST nvmf_vfio_user 00:18:35.597 ************************************ 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.597 12:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.597 ************************************ 00:18:35.597 START TEST nvmf_vfio_user_nvme_compliance 00:18:35.597 ************************************ 00:18:35.597 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:35.859 * Looking for test storage... 
00:18:35.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=415061 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 415061' 00:18:35.859 Process pid: 415061 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 415061 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 415061 ']' 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.859 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:35.859 [2024-07-25 12:31:09.193716] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:18:35.860 [2024-07-25 12:31:09.193771] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.860 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.120 [2024-07-25 12:31:09.280121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:36.120 [2024-07-25 12:31:09.348583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.120 [2024-07-25 12:31:09.348616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.120 [2024-07-25 12:31:09.348623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.120 [2024-07-25 12:31:09.348629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.120 [2024-07-25 12:31:09.348634] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.120 [2024-07-25 12:31:09.348784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.120 [2024-07-25 12:31:09.348985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.120 [2024-07-25 12:31:09.348984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.690 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.690 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:18:36.690 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.076 malloc0 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.076 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:38.076 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.076 00:18:38.076 00:18:38.076 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.076 http://cunit.sourceforge.net/ 00:18:38.076 00:18:38.076 00:18:38.076 Suite: nvme_compliance 00:18:38.076 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 12:31:11.316083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.076 [2024-07-25 12:31:11.317518] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:38.076 [2024-07-25 12:31:11.317537] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:38.076 [2024-07-25 12:31:11.317545] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:38.076 [2024-07-25 12:31:11.319111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.076 passed 00:18:38.076 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 12:31:11.410023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.076 [2024-07-25 12:31:11.413045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.076 passed 00:18:38.336 Test: admin_identify_ns ...[2024-07-25 12:31:11.504566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.336 [2024-07-25 12:31:11.566562] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:38.336 [2024-07-25 12:31:11.574564] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:38.336 [2024-07-25 
12:31:11.595664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.336 passed 00:18:38.336 Test: admin_get_features_mandatory_features ...[2024-07-25 12:31:11.684067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.336 [2024-07-25 12:31:11.687084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.336 passed 00:18:38.596 Test: admin_get_features_optional_features ...[2024-07-25 12:31:11.775930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.596 [2024-07-25 12:31:11.778972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.596 passed 00:18:38.596 Test: admin_set_features_number_of_queues ...[2024-07-25 12:31:11.868552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.596 [2024-07-25 12:31:11.977638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.596 passed 00:18:38.856 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 12:31:12.064250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.856 [2024-07-25 12:31:12.067280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.856 passed 00:18:38.856 Test: admin_get_log_page_with_lpo ...[2024-07-25 12:31:12.156075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.856 [2024-07-25 12:31:12.223557] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:38.856 [2024-07-25 12:31:12.236616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.856 passed 00:18:39.117 Test: fabric_property_get ...[2024-07-25 12:31:12.327629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.117 [2024-07-25 12:31:12.328920] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:39.117 [2024-07-25 12:31:12.330656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.117 passed 00:18:39.117 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 12:31:12.422892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.117 [2024-07-25 12:31:12.424376] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:39.117 [2024-07-25 12:31:12.425950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.117 passed 00:18:39.117 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 12:31:12.514109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.377 [2024-07-25 12:31:12.597553] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:39.377 [2024-07-25 12:31:12.613560] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:39.377 [2024-07-25 12:31:12.618647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.377 passed 00:18:39.377 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 12:31:12.706244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.377 [2024-07-25 12:31:12.707710] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:18:39.377 [2024-07-25 12:31:12.709281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.377 passed 00:18:39.637 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 12:31:12.799839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.637 [2024-07-25 12:31:12.876562] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:39.637 [2024-07-25 12:31:12.900558] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:39.637 [2024-07-25 12:31:12.905634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.637 passed 00:18:39.637 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 12:31:12.993236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.637 [2024-07-25 12:31:12.994689] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:39.637 [2024-07-25 12:31:12.994735] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:39.637 [2024-07-25 12:31:12.996284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.637 passed 00:18:39.897 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 12:31:13.083749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.897 [2024-07-25 12:31:13.178556] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:39.897 [2024-07-25 12:31:13.186554] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:39.897 [2024-07-25 12:31:13.194555] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:39.898 [2024-07-25 12:31:13.202555] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:39.898 [2024-07-25 12:31:13.231638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.898 passed 00:18:40.157 Test: admin_create_io_sq_verify_pc ...[2024-07-25 12:31:13.318201] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.157 [2024-07-25 12:31:13.333567] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:40.157 [2024-07-25 12:31:13.351467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.157 passed 00:18:40.157 Test: admin_create_io_qp_max_qps ...[2024-07-25 12:31:13.443343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.540 [2024-07-25 12:31:14.549557] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:41.540 [2024-07-25 12:31:14.934801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.800 passed 00:18:41.800 Test: admin_create_io_sq_shared_cq ...[2024-07-25 12:31:15.023114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.800 [2024-07-25 12:31:15.154554] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:41.800 [2024-07-25 12:31:15.190615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:42.060 passed 00:18:42.060 00:18:42.060 Run Summary: Type Total Ran Passed Failed Inactive 00:18:42.060 
suites 1 1 n/a 0 0 00:18:42.060 tests 18 18 18 0 0 00:18:42.060 asserts 360 360 360 0 n/a 00:18:42.060 00:18:42.060 Elapsed time = 1.615 seconds 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 415061 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 415061 ']' 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 415061 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 415061 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 415061' 00:18:42.060 killing process with pid 415061 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 415061 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 415061 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:42.060 00:18:42.060 real 0m6.440s 00:18:42.060 user 0m18.434s 00:18:42.060 sys 0m0.503s 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.060 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:42.060 ************************************ 00:18:42.060 END TEST nvmf_vfio_user_nvme_compliance 00:18:42.060 ************************************ 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.321 ************************************ 00:18:42.321 START TEST nvmf_vfio_user_fuzz 00:18:42.321 ************************************ 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:42.321 * Looking for test storage... 
00:18:42.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:42.321 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=416326 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 416326' 00:18:42.322 Process pid: 416326 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 416326 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 416326 ']' 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
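
vfio_user_fuzz.sh brings up its own single-core target before fuzzing, using the start/trap/wait pattern traced above. Written out as a sketch (SPDK_ROOT stands for the jenkins workspace path in the log; killprocess and waitforlisten are the autotest_common.sh helpers seen in the trace):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from the log
    rm -rf /var/run/vfio-user                                     # start from a clean socket directory
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &       # -i 0: shm id, -m 0x1: one core, -e 0xFFFF as passed in the trace
    nvmfpid=$!                                                    # 416326 in this run
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT       # tear the target down on abort
    waitforlisten "$nvmfpid"                                      # poll until /var/tmp/spdk.sock accepts RPCs
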
00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.322 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.261 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.261 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:18:43.261 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 malloc0 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.202 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.463 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.463 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
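
With the target listening, the script configures it over JSON-RPC. The same sequence collected into one sketch, calling scripts/rpc.py directly (rpc_cmd in the trace is a thin wrapper around it and talks to the default /var/tmp/spdk.sock):

    RPC=$SPDK_ROOT/scripts/rpc.py
    $RPC nvmf_create_transport -t VFIOUSER                             # enable the vfio-user transport
    mkdir -p /var/run/vfio-user                                        # directory that will hold the device socket
    $RPC bdev_malloc_create 64 512 -b malloc0                          # 64 MiB ram disk, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # allow any host, serial "spdk"
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0      # expose the ram disk as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The trid string built just above ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what nvme_fuzz is pointed at below via -F, for a seeded, 30-second run (-S 123456 -t 30) against that one subsystem.
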
00:18:44.463 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:16.575 Fuzzing completed. Shutting down the fuzz application 00:19:16.575 00:19:16.575 Dumping successful admin opcodes: 00:19:16.575 8, 9, 10, 24, 00:19:16.575 Dumping successful io opcodes: 00:19:16.575 0, 00:19:16.575 NS: 0x200003a1ef00 I/O qp, Total commands completed: 677712, total successful commands: 2639, random_seed: 1218811456 00:19:16.575 NS: 0x200003a1ef00 admin qp, Total commands completed: 167174, total successful commands: 1363, random_seed: 758950656 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 416326 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 416326 ']' 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 416326 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 416326 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 416326' 00:19:16.575 killing process with pid 416326 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 416326 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 416326 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:16.575 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:16.575 00:19:16.575 real 0m32.841s 00:19:16.576 user 0m39.799s 00:19:16.576 sys 0m21.167s 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:16.576 ************************************ 
00:19:16.576 END TEST nvmf_vfio_user_fuzz 00:19:16.576 ************************************ 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.576 ************************************ 00:19:16.576 START TEST nvmf_auth_target 00:19:16.576 ************************************ 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:16.576 * Looking for test storage... 00:19:16.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.576 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.576 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a 
pci_devs 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:23.217 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:23.217 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:23.217 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:23.217 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.217 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.218 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:19:23.479 00:19:23.479 --- 10.0.0.2 ping statistics --- 00:19:23.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.479 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:19:23.479 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:19:23.740 00:19:23.740 --- 10.0.0.1 ping statistics --- 00:19:23.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.740 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:19:23.740 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.740 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:23.740 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.740 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.740 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.740 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=425650 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 425650 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 425650 ']' 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 
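
For the TCP auth target, nvmftestinit moves the target-side port (cvl_0_0) into its own network namespace and leaves the initiator port (cvl_0_1) in the root namespace, so host and target can talk NVMe/TCP over real NICs on one machine. The plumbing traced above, gathered into one sketch:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                    # drop stale addresses
    ip netns add cvl_0_0_ns_spdk                                          # namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP port 4420 in
    ping -c 1 10.0.0.2                                                    # root namespace reaches the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # and the target reaches the initiator

Everything target-side from here on runs through ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is re-prefixed with the namespace command before nvmf_tgt is started with -L nvmf_auth debug logging.
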
00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.741 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=425699 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aedf22b8ab5682c572a89fa84cd5ce7fad574de203c20d81 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.WSU 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aedf22b8ab5682c572a89fa84cd5ce7fad574de203c20d81 0 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aedf22b8ab5682c572a89fa84cd5ce7fad574de203c20d81 0 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aedf22b8ab5682c572a89fa84cd5ce7fad574de203c20d81 00:19:24.685 
12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.WSU 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.WSU 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.WSU 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ba06b667521f6960e6a1ad33940c86e07eee88532c660ea9ead5b59a8cb44b2f 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZL3 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba06b667521f6960e6a1ad33940c86e07eee88532c660ea9ead5b59a8cb44b2f 3 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba06b667521f6960e6a1ad33940c86e07eee88532c660ea9ead5b59a8cb44b2f 3 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ba06b667521f6960e6a1ad33940c86e07eee88532c660ea9ead5b59a8cb44b2f 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:24.685 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZL3 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZL3 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ZL3 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.685 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6369c399856212bce5596ae5b1af46d8 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KF8 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6369c399856212bce5596ae5b1af46d8 1 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6369c399856212bce5596ae5b1af46d8 1 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6369c399856212bce5596ae5b1af46d8 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:24.685 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KF8 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KF8 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.KF8 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3384c163b30e83b4ecc5abeeac34d63b38e19654f6da24ea 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IrM 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3384c163b30e83b4ecc5abeeac34d63b38e19654f6da24ea 2 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
3384c163b30e83b4ecc5abeeac34d63b38e19654f6da24ea 2 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.947 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3384c163b30e83b4ecc5abeeac34d63b38e19654f6da24ea 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IrM 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IrM 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.IrM 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2247b5a823f7d0f008555fc22ea953572497dee8741ac10c 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dxp 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2247b5a823f7d0f008555fc22ea953572497dee8741ac10c 2 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2247b5a823f7d0f008555fc22ea953572497dee8741ac10c 2 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2247b5a823f7d0f008555fc22ea953572497dee8741ac10c 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dxp 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dxp 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.dxp 00:19:24.948 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8eb110201778df4a9f44e2d54cc0172c 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jzI 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8eb110201778df4a9f44e2d54cc0172c 1 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8eb110201778df4a9f44e2d54cc0172c 1 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8eb110201778df4a9f44e2d54cc0172c 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jzI 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jzI 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.jzI 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5b03c4f2eb422b87865a23ce9bc961b0bbadae231b33cf10486f40772fef23cb 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:24.948 
12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.r81 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5b03c4f2eb422b87865a23ce9bc961b0bbadae231b33cf10486f40772fef23cb 3 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5b03c4f2eb422b87865a23ce9bc961b0bbadae231b33cf10486f40772fef23cb 3 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5b03c4f2eb422b87865a23ce9bc961b0bbadae231b33cf10486f40772fef23cb 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:24.948 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.r81 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.r81 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.r81 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 425650 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 425650 ']' 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.210 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 425699 /var/tmp/host.sock 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 425699 ']' 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
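# Note (illustrative sketch, not captured output): the secrets generated above use the
# NVMe DH-HMAC-CHAP representation "DHHC-1:<hash id>:<base64 blob>:", where the hash id
# matches the digests map in the trace (00=null, 01=sha256, 02=sha384, 03=sha512) and the
# base64 blob is the secret text followed by its CRC-32. A minimal approximation of what
# gen_dhchap_key/format_dhchap_key in nvmf/common.sh do, assuming python3 is available and
# that the CRC-32 is appended in little-endian byte order (an assumption):
key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex chars, as in the sha256/32-byte case above
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret text
crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte CRC-32 appended to the secret
print("DHHC-1:01:" + base64.b64encode(secret + crc).decode() + ":")  # 01 = sha256
EOF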
00:19:25.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.471 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WSU 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.WSU 00:19:25.732 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.WSU 00:19:25.732 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ZL3 ]] 00:19:25.732 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZL3 00:19:25.732 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.732 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.732 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.732 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZL3 00:19:25.732 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZL3 00:19:25.993 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:25.993 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.KF8 00:19:25.993 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.993 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.993 12:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.993 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.KF8 00:19:25.993 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.KF8 00:19:26.253 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.IrM ]] 00:19:26.253 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IrM 00:19:26.253 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.253 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.253 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.253 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IrM 00:19:26.253 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IrM 00:19:26.513 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:26.513 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.dxp 00:19:26.513 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.513 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.513 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.513 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.dxp 00:19:26.513 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.dxp 00:19:26.772 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.jzI ]] 00:19:26.772 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jzI 00:19:26.772 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.772 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.772 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.772 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jzI 00:19:26.772 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jzI 00:19:27.033 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:27.033 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.r81 00:19:27.033 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.033 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.033 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.033 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.r81 00:19:27.033 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.r81 00:19:27.292 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:27.292 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:27.292 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.292 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.292 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.292 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.552 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.812 00:19:27.812 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.812 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.812 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.073 { 00:19:28.073 "cntlid": 1, 00:19:28.073 "qid": 0, 00:19:28.073 "state": "enabled", 00:19:28.073 "thread": "nvmf_tgt_poll_group_000", 00:19:28.073 "listen_address": { 00:19:28.073 "trtype": "TCP", 00:19:28.073 "adrfam": "IPv4", 00:19:28.073 "traddr": "10.0.0.2", 00:19:28.073 "trsvcid": "4420" 00:19:28.073 }, 00:19:28.073 "peer_address": { 00:19:28.073 "trtype": "TCP", 00:19:28.073 "adrfam": "IPv4", 00:19:28.073 "traddr": "10.0.0.1", 00:19:28.073 "trsvcid": "45834" 00:19:28.073 }, 00:19:28.073 "auth": { 00:19:28.073 "state": "completed", 00:19:28.073 "digest": "sha256", 00:19:28.073 "dhgroup": "null" 00:19:28.073 } 00:19:28.073 } 00:19:28.073 ]' 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.073 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.333 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret 
DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.272 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.532 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.533 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:19:29.792 00:19:29.792 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.792 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.792 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.053 { 00:19:30.053 "cntlid": 3, 00:19:30.053 "qid": 0, 00:19:30.053 "state": "enabled", 00:19:30.053 "thread": "nvmf_tgt_poll_group_000", 00:19:30.053 "listen_address": { 00:19:30.053 "trtype": "TCP", 00:19:30.053 "adrfam": "IPv4", 00:19:30.053 "traddr": "10.0.0.2", 00:19:30.053 "trsvcid": "4420" 00:19:30.053 }, 00:19:30.053 "peer_address": { 00:19:30.053 "trtype": "TCP", 00:19:30.053 "adrfam": "IPv4", 00:19:30.053 "traddr": "10.0.0.1", 00:19:30.053 "trsvcid": "41950" 00:19:30.053 }, 00:19:30.053 "auth": { 00:19:30.053 "state": "completed", 00:19:30.053 "digest": "sha256", 00:19:30.053 "dhgroup": "null" 00:19:30.053 } 00:19:30.053 } 00:19:30.053 ]' 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.053 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.314 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.885 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.885 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.145 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.405 00:19:31.405 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.405 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.405 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.665 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.665 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.665 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.665 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.665 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.665 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.665 { 00:19:31.665 "cntlid": 5, 00:19:31.665 "qid": 0, 00:19:31.665 "state": "enabled", 00:19:31.665 "thread": "nvmf_tgt_poll_group_000", 00:19:31.665 "listen_address": { 00:19:31.665 "trtype": "TCP", 00:19:31.665 "adrfam": "IPv4", 00:19:31.665 "traddr": "10.0.0.2", 00:19:31.665 "trsvcid": "4420" 00:19:31.665 }, 00:19:31.665 "peer_address": { 00:19:31.665 "trtype": "TCP", 00:19:31.665 "adrfam": "IPv4", 00:19:31.665 "traddr": "10.0.0.1", 00:19:31.665 "trsvcid": "41988" 00:19:31.665 }, 00:19:31.665 "auth": { 00:19:31.665 "state": "completed", 00:19:31.665 "digest": "sha256", 00:19:31.665 "dhgroup": "null" 00:19:31.665 } 00:19:31.665 } 00:19:31.665 ]' 00:19:31.665 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.665 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.665 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.665 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.665 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.925 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.925 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.925 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.925 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:19:32.866 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.866 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.127 00:19:33.127 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.127 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.127 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.386 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.386 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.386 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.386 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.386 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.386 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.386 { 00:19:33.386 "cntlid": 7, 00:19:33.386 "qid": 0, 00:19:33.386 "state": "enabled", 00:19:33.386 "thread": "nvmf_tgt_poll_group_000", 00:19:33.386 "listen_address": { 00:19:33.386 "trtype": "TCP", 00:19:33.386 "adrfam": "IPv4", 00:19:33.386 "traddr": "10.0.0.2", 00:19:33.386 "trsvcid": "4420" 00:19:33.386 }, 00:19:33.386 "peer_address": { 00:19:33.386 "trtype": "TCP", 00:19:33.387 "adrfam": "IPv4", 00:19:33.387 "traddr": "10.0.0.1", 00:19:33.387 "trsvcid": "42006" 00:19:33.387 }, 00:19:33.387 "auth": { 00:19:33.387 "state": "completed", 00:19:33.387 "digest": "sha256", 00:19:33.387 "dhgroup": "null" 00:19:33.387 } 00:19:33.387 } 00:19:33.387 ]' 00:19:33.387 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.387 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.387 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.647 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.647 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.647 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.647 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.647 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.907 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.477 12:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.477 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.737 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.998 00:19:34.998 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.998 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.998 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.258 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.258 { 00:19:35.258 "cntlid": 9, 00:19:35.258 "qid": 0, 00:19:35.258 "state": "enabled", 00:19:35.258 "thread": "nvmf_tgt_poll_group_000", 00:19:35.258 "listen_address": { 00:19:35.258 "trtype": "TCP", 00:19:35.258 "adrfam": "IPv4", 00:19:35.258 "traddr": "10.0.0.2", 00:19:35.258 "trsvcid": "4420" 00:19:35.258 }, 00:19:35.258 "peer_address": { 00:19:35.258 "trtype": "TCP", 00:19:35.258 "adrfam": "IPv4", 00:19:35.258 "traddr": "10.0.0.1", 00:19:35.258 "trsvcid": "42040" 00:19:35.258 }, 00:19:35.258 "auth": { 00:19:35.258 "state": "completed", 00:19:35.258 "digest": "sha256", 00:19:35.258 "dhgroup": "ffdhe2048" 00:19:35.258 } 00:19:35.258 } 00:19:35.258 ]' 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.258 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.518 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.089 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.350 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:36.350 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.350 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.350 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.350 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.350 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.351 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.351 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.351 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.351 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.351 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.612 00:19:36.612 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.612 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.612 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.872 { 00:19:36.872 "cntlid": 11, 00:19:36.872 "qid": 0, 00:19:36.872 "state": "enabled", 00:19:36.872 "thread": "nvmf_tgt_poll_group_000", 00:19:36.872 "listen_address": { 
00:19:36.872 "trtype": "TCP", 00:19:36.872 "adrfam": "IPv4", 00:19:36.872 "traddr": "10.0.0.2", 00:19:36.872 "trsvcid": "4420" 00:19:36.872 }, 00:19:36.872 "peer_address": { 00:19:36.872 "trtype": "TCP", 00:19:36.872 "adrfam": "IPv4", 00:19:36.872 "traddr": "10.0.0.1", 00:19:36.872 "trsvcid": "42068" 00:19:36.872 }, 00:19:36.872 "auth": { 00:19:36.872 "state": "completed", 00:19:36.872 "digest": "sha256", 00:19:36.872 "dhgroup": "ffdhe2048" 00:19:36.872 } 00:19:36.872 } 00:19:36.872 ]' 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.872 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.132 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.132 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.132 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.132 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.073 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.332 00:19:38.332 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.332 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.332 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.592 { 00:19:38.592 "cntlid": 13, 00:19:38.592 "qid": 0, 00:19:38.592 "state": "enabled", 00:19:38.592 "thread": "nvmf_tgt_poll_group_000", 00:19:38.592 "listen_address": { 00:19:38.592 "trtype": "TCP", 00:19:38.592 "adrfam": "IPv4", 00:19:38.592 "traddr": "10.0.0.2", 00:19:38.592 "trsvcid": "4420" 00:19:38.592 }, 00:19:38.592 "peer_address": { 00:19:38.592 "trtype": "TCP", 00:19:38.592 "adrfam": "IPv4", 00:19:38.592 "traddr": "10.0.0.1", 00:19:38.592 "trsvcid": "42086" 00:19:38.592 }, 00:19:38.592 "auth": { 00:19:38.592 
"state": "completed", 00:19:38.592 "digest": "sha256", 00:19:38.592 "dhgroup": "ffdhe2048" 00:19:38.592 } 00:19:38.592 } 00:19:38.592 ]' 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.592 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.592 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.592 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.852 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.852 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.852 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.852 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.792 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.792 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.053 00:19:40.053 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.053 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.053 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.314 { 00:19:40.314 "cntlid": 15, 00:19:40.314 "qid": 0, 00:19:40.314 "state": "enabled", 00:19:40.314 "thread": "nvmf_tgt_poll_group_000", 00:19:40.314 "listen_address": { 00:19:40.314 "trtype": "TCP", 00:19:40.314 "adrfam": "IPv4", 00:19:40.314 "traddr": "10.0.0.2", 00:19:40.314 "trsvcid": "4420" 00:19:40.314 }, 00:19:40.314 "peer_address": { 00:19:40.314 "trtype": "TCP", 00:19:40.314 "adrfam": "IPv4", 00:19:40.314 "traddr": "10.0.0.1", 00:19:40.314 "trsvcid": "41188" 00:19:40.314 }, 00:19:40.314 "auth": { 00:19:40.314 "state": "completed", 00:19:40.314 "digest": "sha256", 00:19:40.314 "dhgroup": "ffdhe2048" 00:19:40.314 } 00:19:40.314 } 00:19:40.314 ]' 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.314 12:32:13 
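The surrounding entries record single passes of the script's connect_authenticate helper: the target grants a host NQN access to the subsystem with a DH-HMAC-CHAP key, the host-side SPDK application attaches a controller presenting the matching key, the negotiated authentication parameters are verified, and everything is torn down again. A condensed sketch of one such pass, with the NQNs, address, and key names copied from the log; key2/ckey2 refer to key objects set up earlier in the run (outside this excerpt), and the target-side calls are written as plain scripts/rpc.py invocations against the default target RPC socket, which is an assumption since the log hides the rpc_cmd expansion:

    # target side: allow the host and require DH-HMAC-CHAP with key2
    # (ckey2 additionally enables bidirectional authentication)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side (the SPDK app listening on /var/tmp/host.sock): attach a
    # controller, presenting the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # ... verification of the qpair's auth state happens here (see the sketch
    # further below) ...

    # teardown: detach the host controller and drop the host entry on the target
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a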
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.314 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.574 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.574 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.574 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.574 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.513 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.774 00:19:41.774 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.774 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.774 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.034 { 00:19:42.034 "cntlid": 17, 00:19:42.034 "qid": 0, 00:19:42.034 "state": "enabled", 00:19:42.034 "thread": "nvmf_tgt_poll_group_000", 00:19:42.034 "listen_address": { 00:19:42.034 "trtype": "TCP", 00:19:42.034 "adrfam": "IPv4", 00:19:42.034 "traddr": "10.0.0.2", 00:19:42.034 "trsvcid": "4420" 00:19:42.034 }, 00:19:42.034 "peer_address": { 00:19:42.034 "trtype": "TCP", 00:19:42.034 "adrfam": "IPv4", 00:19:42.034 "traddr": "10.0.0.1", 00:19:42.034 "trsvcid": "41216" 00:19:42.034 }, 00:19:42.034 "auth": { 00:19:42.034 "state": "completed", 00:19:42.034 "digest": "sha256", 00:19:42.034 "dhgroup": "ffdhe3072" 00:19:42.034 } 00:19:42.034 } 00:19:42.034 ]' 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.034 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.034 12:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.294 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.294 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.294 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.294 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.234 12:32:16 
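Two RPC helpers alternate throughout this excerpt: rpc_cmd (wrapped in xtrace_disable / set +x, so its expansion is never echoed) drives the NVMe-oF target, while hostrpc, whose expansion is echoed at target/auth.sh line 31, drives a second SPDK application that plays the host role and listens on /var/tmp/host.sock. Rough equivalents of the two wrappers, assuming the target uses SPDK's default RPC socket, which this excerpt does not show:

    # target-side RPCs (default RPC socket assumed)
    rpc_cmd() {
        scripts/rpc.py "$@"
    }

    # host-side RPCs, matching the expansions visible in the log
    hostrpc() {
        scripts/rpc.py -s /var/tmp/host.sock "$@"
    }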
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.234 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.493 00:19:43.493 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.493 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.493 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.754 { 00:19:43.754 "cntlid": 19, 00:19:43.754 "qid": 0, 00:19:43.754 "state": "enabled", 00:19:43.754 "thread": "nvmf_tgt_poll_group_000", 00:19:43.754 "listen_address": { 00:19:43.754 "trtype": "TCP", 00:19:43.754 "adrfam": "IPv4", 00:19:43.754 "traddr": "10.0.0.2", 00:19:43.754 "trsvcid": "4420" 00:19:43.754 }, 00:19:43.754 "peer_address": { 00:19:43.754 "trtype": "TCP", 00:19:43.754 "adrfam": "IPv4", 00:19:43.754 "traddr": "10.0.0.1", 00:19:43.754 "trsvcid": "41242" 00:19:43.754 }, 00:19:43.754 "auth": { 00:19:43.754 "state": "completed", 00:19:43.754 "digest": "sha256", 00:19:43.754 "dhgroup": "ffdhe3072" 00:19:43.754 } 00:19:43.754 } 00:19:43.754 ]' 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.754 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.013 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.013 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.013 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.013 12:32:17 
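After each attach, the script confirms that authentication actually completed with the expected parameters by querying the target's active queue pairs and picking the auth fields out with jq, as the entries just above show for sha256/ffdhe3072. A minimal check along the same lines (subsystem NQN and expected values taken from the log; the exact shell plumbing inside auth.sh is not visible here):

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # each reported qpair carries the negotiated auth parameters
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]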
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.013 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.281 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:19:45.219 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.219 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:45.219 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.219 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.479 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.739 00:19:45.739 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.739 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.739 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.998 { 00:19:45.998 "cntlid": 21, 00:19:45.998 "qid": 0, 00:19:45.998 "state": "enabled", 00:19:45.998 "thread": "nvmf_tgt_poll_group_000", 00:19:45.998 "listen_address": { 00:19:45.998 "trtype": "TCP", 00:19:45.998 "adrfam": "IPv4", 00:19:45.998 "traddr": "10.0.0.2", 00:19:45.998 "trsvcid": "4420" 00:19:45.998 }, 00:19:45.998 "peer_address": { 00:19:45.998 "trtype": "TCP", 00:19:45.998 "adrfam": "IPv4", 00:19:45.998 "traddr": "10.0.0.1", 00:19:45.998 "trsvcid": "41256" 00:19:45.998 }, 00:19:45.998 "auth": { 00:19:45.998 "state": "completed", 00:19:45.998 "digest": "sha256", 00:19:45.998 "dhgroup": "ffdhe3072" 00:19:45.998 } 00:19:45.998 } 00:19:45.998 ]' 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.998 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.278 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.278 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.278 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.278 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.278 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.278 
12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.666 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.666 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.236 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.236 { 00:19:48.236 "cntlid": 23, 00:19:48.236 "qid": 0, 00:19:48.236 "state": "enabled", 00:19:48.236 "thread": "nvmf_tgt_poll_group_000", 00:19:48.236 "listen_address": { 00:19:48.236 "trtype": "TCP", 00:19:48.236 "adrfam": "IPv4", 00:19:48.236 "traddr": "10.0.0.2", 00:19:48.236 "trsvcid": "4420" 00:19:48.236 }, 00:19:48.236 "peer_address": { 00:19:48.236 "trtype": "TCP", 00:19:48.236 "adrfam": "IPv4", 00:19:48.236 "traddr": "10.0.0.1", 00:19:48.236 "trsvcid": "41286" 00:19:48.236 }, 00:19:48.236 "auth": { 00:19:48.236 "state": "completed", 00:19:48.236 "digest": "sha256", 00:19:48.236 "dhgroup": "ffdhe3072" 00:19:48.236 } 00:19:48.236 } 00:19:48.236 ]' 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.236 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.495 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.495 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.495 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.495 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.495 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.755 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:19:49.323 12:32:22 
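Besides the SPDK host application, each pass also authenticates with the kernel initiator: nvme-cli connects with the DHHC-1 secret strings passed directly on the command line and then disconnects. The key3 passes supply only the host secret (no --dhchap-ctrl-secret), i.e. one-way authentication, while the other keys also carry a controller secret for bidirectional authentication. The shape of the call, with the secret strings replaced by placeholders (the log shows them in full):

    # bidirectional: host secret plus controller secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-secret 'DHHC-1:...:' --dhchap-ctrl-secret 'DHHC-1:...:'

    # one-way (the key3 passes): only --dhchap-secret is given

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0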
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.323 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.583 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.842 00:19:49.842 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.842 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.842 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.102 { 00:19:50.102 "cntlid": 25, 00:19:50.102 "qid": 0, 00:19:50.102 "state": "enabled", 00:19:50.102 "thread": "nvmf_tgt_poll_group_000", 00:19:50.102 "listen_address": { 00:19:50.102 "trtype": "TCP", 00:19:50.102 "adrfam": "IPv4", 00:19:50.102 "traddr": "10.0.0.2", 00:19:50.102 "trsvcid": "4420" 00:19:50.102 }, 00:19:50.102 "peer_address": { 00:19:50.102 "trtype": "TCP", 00:19:50.102 "adrfam": "IPv4", 00:19:50.102 "traddr": "10.0.0.1", 00:19:50.102 "trsvcid": "60658" 00:19:50.102 }, 00:19:50.102 "auth": { 00:19:50.102 "state": "completed", 00:19:50.102 "digest": "sha256", 00:19:50.102 "dhgroup": "ffdhe4096" 00:19:50.102 } 00:19:50.102 } 00:19:50.102 ]' 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.102 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.362 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
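The whole block is driven by two nested loops visible in the trace (target/auth.sh lines 92-94): for every DH group the host's bdev_nvme options are reconfigured, and then every key index is exercised through connect_authenticate. In this excerpt the digest stays sha256 while the group advances from ffdhe2048 through ffdhe3072 and ffdhe4096 towards ffdhe6144. A skeleton of that driver loop; the array contents are illustrative, since the script's own definitions lie outside this excerpt:

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # illustrative
    keys=(key0 key1 key2 key3)                           # illustrative

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # restrict the host to one digest/DH-group combination per pass
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done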
00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.742 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.742 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.002 00:19:52.002 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.002 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.002 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.261 { 00:19:52.261 "cntlid": 27, 00:19:52.261 "qid": 0, 00:19:52.261 "state": "enabled", 00:19:52.261 "thread": "nvmf_tgt_poll_group_000", 00:19:52.261 "listen_address": { 00:19:52.261 "trtype": "TCP", 00:19:52.261 "adrfam": "IPv4", 00:19:52.261 "traddr": "10.0.0.2", 00:19:52.261 "trsvcid": "4420" 00:19:52.261 }, 00:19:52.261 "peer_address": { 00:19:52.261 "trtype": "TCP", 00:19:52.261 "adrfam": "IPv4", 00:19:52.261 "traddr": "10.0.0.1", 00:19:52.261 "trsvcid": "60686" 00:19:52.261 }, 00:19:52.261 "auth": { 00:19:52.261 "state": "completed", 00:19:52.261 "digest": "sha256", 00:19:52.261 "dhgroup": "ffdhe4096" 00:19:52.261 } 00:19:52.261 } 00:19:52.261 ]' 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.261 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.522 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.522 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.522 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.522 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.522 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.522 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.904 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.164 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.428 00:19:54.428 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.428 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.428 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.689 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.689 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.689 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.689 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.689 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.689 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.689 { 00:19:54.689 "cntlid": 29, 00:19:54.689 "qid": 0, 00:19:54.689 "state": "enabled", 00:19:54.689 "thread": "nvmf_tgt_poll_group_000", 00:19:54.689 "listen_address": { 00:19:54.689 "trtype": "TCP", 00:19:54.690 "adrfam": "IPv4", 00:19:54.690 "traddr": "10.0.0.2", 00:19:54.690 "trsvcid": "4420" 00:19:54.690 }, 00:19:54.690 "peer_address": { 00:19:54.690 "trtype": "TCP", 00:19:54.690 "adrfam": "IPv4", 00:19:54.690 "traddr": "10.0.0.1", 00:19:54.690 "trsvcid": "60730" 00:19:54.690 }, 00:19:54.690 "auth": { 00:19:54.690 "state": "completed", 00:19:54.690 "digest": "sha256", 00:19:54.690 "dhgroup": "ffdhe4096" 00:19:54.690 } 00:19:54.690 } 00:19:54.690 ]' 00:19:54.690 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.690 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.690 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.690 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.690 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.690 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.690 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.690 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.950 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:19:55.520 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.520 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:55.520 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.520 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.520 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.520 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.520 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.520 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.780 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.040 00:19:56.040 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.040 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.040 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.300 { 00:19:56.300 "cntlid": 31, 00:19:56.300 "qid": 0, 00:19:56.300 "state": "enabled", 00:19:56.300 "thread": "nvmf_tgt_poll_group_000", 00:19:56.300 "listen_address": { 00:19:56.300 "trtype": "TCP", 00:19:56.300 "adrfam": "IPv4", 00:19:56.300 "traddr": "10.0.0.2", 00:19:56.300 "trsvcid": "4420" 00:19:56.300 }, 00:19:56.300 "peer_address": { 00:19:56.300 "trtype": "TCP", 00:19:56.300 "adrfam": "IPv4", 00:19:56.300 "traddr": "10.0.0.1", 00:19:56.300 "trsvcid": "60766" 00:19:56.300 }, 00:19:56.300 "auth": { 00:19:56.300 "state": "completed", 00:19:56.300 "digest": "sha256", 00:19:56.300 "dhgroup": "ffdhe4096" 00:19:56.300 } 00:19:56.300 } 00:19:56.300 ]' 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.300 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.560 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.560 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.560 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.821 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.391 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.651 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.910 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.170 { 00:19:58.170 "cntlid": 33, 00:19:58.170 "qid": 0, 00:19:58.170 "state": "enabled", 00:19:58.170 "thread": "nvmf_tgt_poll_group_000", 00:19:58.170 "listen_address": { 
00:19:58.170 "trtype": "TCP", 00:19:58.170 "adrfam": "IPv4", 00:19:58.170 "traddr": "10.0.0.2", 00:19:58.170 "trsvcid": "4420" 00:19:58.170 }, 00:19:58.170 "peer_address": { 00:19:58.170 "trtype": "TCP", 00:19:58.170 "adrfam": "IPv4", 00:19:58.170 "traddr": "10.0.0.1", 00:19:58.170 "trsvcid": "60802" 00:19:58.170 }, 00:19:58.170 "auth": { 00:19:58.170 "state": "completed", 00:19:58.170 "digest": "sha256", 00:19:58.170 "dhgroup": "ffdhe6144" 00:19:58.170 } 00:19:58.170 } 00:19:58.170 ]' 00:19:58.170 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.430 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.430 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.430 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.430 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.430 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.430 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.430 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.691 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:00.075 12:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.075 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.335 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.595 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.595 { 00:20:00.595 "cntlid": 35, 00:20:00.595 "qid": 0, 00:20:00.595 "state": "enabled", 00:20:00.595 "thread": "nvmf_tgt_poll_group_000", 00:20:00.595 "listen_address": { 00:20:00.595 "trtype": "TCP", 00:20:00.595 "adrfam": "IPv4", 00:20:00.595 "traddr": "10.0.0.2", 00:20:00.595 "trsvcid": "4420" 00:20:00.595 }, 00:20:00.595 "peer_address": { 00:20:00.595 "trtype": "TCP", 00:20:00.595 "adrfam": "IPv4", 00:20:00.596 "traddr": "10.0.0.1", 00:20:00.596 "trsvcid": "52466" 00:20:00.596 
}, 00:20:00.596 "auth": { 00:20:00.596 "state": "completed", 00:20:00.596 "digest": "sha256", 00:20:00.596 "dhgroup": "ffdhe6144" 00:20:00.596 } 00:20:00.596 } 00:20:00.596 ]' 00:20:00.596 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.855 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.855 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.855 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.855 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.855 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.855 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.856 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.116 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.685 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.944 12:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.944 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.204 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.464 { 00:20:02.464 "cntlid": 37, 00:20:02.464 "qid": 0, 00:20:02.464 "state": "enabled", 00:20:02.464 "thread": "nvmf_tgt_poll_group_000", 00:20:02.464 "listen_address": { 00:20:02.464 "trtype": "TCP", 00:20:02.464 "adrfam": "IPv4", 00:20:02.464 "traddr": "10.0.0.2", 00:20:02.464 "trsvcid": "4420" 00:20:02.464 }, 00:20:02.464 "peer_address": { 00:20:02.464 "trtype": "TCP", 00:20:02.464 "adrfam": "IPv4", 00:20:02.464 "traddr": "10.0.0.1", 00:20:02.464 "trsvcid": "52486" 00:20:02.464 }, 00:20:02.464 "auth": { 00:20:02.464 "state": "completed", 00:20:02.464 "digest": "sha256", 00:20:02.464 "dhgroup": "ffdhe6144" 00:20:02.464 } 00:20:02.464 } 00:20:02.464 ]' 00:20:02.464 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.724 12:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.724 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.724 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.724 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.724 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.724 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.724 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.983 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.554 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.814 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.384 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.384 { 00:20:04.384 "cntlid": 39, 00:20:04.384 "qid": 0, 00:20:04.384 "state": "enabled", 00:20:04.384 "thread": "nvmf_tgt_poll_group_000", 00:20:04.384 "listen_address": { 00:20:04.384 "trtype": "TCP", 00:20:04.384 "adrfam": "IPv4", 00:20:04.384 "traddr": "10.0.0.2", 00:20:04.384 "trsvcid": "4420" 00:20:04.384 }, 00:20:04.384 "peer_address": { 00:20:04.384 "trtype": "TCP", 00:20:04.384 "adrfam": "IPv4", 00:20:04.384 "traddr": "10.0.0.1", 00:20:04.384 "trsvcid": "52506" 00:20:04.384 }, 00:20:04.384 "auth": { 00:20:04.384 "state": "completed", 00:20:04.384 "digest": "sha256", 00:20:04.384 "dhgroup": "ffdhe6144" 00:20:04.384 } 00:20:04.384 } 00:20:04.384 ]' 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.384 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.645 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.645 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.645 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.645 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.645 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.645 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.905 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.475 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.735 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.304 00:20:06.304 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.304 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.304 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.563 { 00:20:06.563 "cntlid": 41, 00:20:06.563 "qid": 0, 00:20:06.563 "state": "enabled", 00:20:06.563 "thread": "nvmf_tgt_poll_group_000", 00:20:06.563 "listen_address": { 00:20:06.563 "trtype": "TCP", 00:20:06.563 "adrfam": "IPv4", 00:20:06.563 "traddr": "10.0.0.2", 00:20:06.563 "trsvcid": "4420" 00:20:06.563 }, 00:20:06.563 "peer_address": { 00:20:06.563 "trtype": "TCP", 00:20:06.563 "adrfam": "IPv4", 00:20:06.563 "traddr": "10.0.0.1", 00:20:06.563 "trsvcid": "52524" 00:20:06.563 }, 00:20:06.563 "auth": { 00:20:06.563 "state": "completed", 00:20:06.563 "digest": "sha256", 00:20:06.563 "dhgroup": "ffdhe8192" 00:20:06.563 } 00:20:06.563 } 00:20:06.563 ]' 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.563 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.823 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.823 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:06.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.762 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.762 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.332 00:20:08.332 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.332 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.332 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.592 { 00:20:08.592 "cntlid": 43, 00:20:08.592 "qid": 0, 00:20:08.592 "state": "enabled", 00:20:08.592 "thread": "nvmf_tgt_poll_group_000", 00:20:08.592 "listen_address": { 00:20:08.592 "trtype": "TCP", 00:20:08.592 "adrfam": "IPv4", 00:20:08.592 "traddr": "10.0.0.2", 00:20:08.592 "trsvcid": "4420" 00:20:08.592 }, 00:20:08.592 "peer_address": { 00:20:08.592 "trtype": "TCP", 00:20:08.592 "adrfam": "IPv4", 00:20:08.592 "traddr": "10.0.0.1", 00:20:08.592 "trsvcid": "52556" 00:20:08.592 }, 00:20:08.592 "auth": { 00:20:08.592 "state": "completed", 00:20:08.592 "digest": "sha256", 00:20:08.592 "dhgroup": "ffdhe8192" 00:20:08.592 } 00:20:08.592 } 00:20:08.592 ]' 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.592 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.852 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.852 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.852 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.852 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.852 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.113 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.683 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.943 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.511 00:20:10.511 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.511 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.511 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.771 { 00:20:10.771 "cntlid": 45, 00:20:10.771 "qid": 0, 00:20:10.771 "state": "enabled", 00:20:10.771 "thread": "nvmf_tgt_poll_group_000", 00:20:10.771 "listen_address": { 00:20:10.771 "trtype": "TCP", 00:20:10.771 "adrfam": "IPv4", 00:20:10.771 "traddr": "10.0.0.2", 00:20:10.771 "trsvcid": "4420" 00:20:10.771 }, 00:20:10.771 "peer_address": { 00:20:10.771 "trtype": "TCP", 00:20:10.771 "adrfam": "IPv4", 00:20:10.771 "traddr": "10.0.0.1", 00:20:10.771 "trsvcid": "48022" 00:20:10.771 }, 00:20:10.771 "auth": { 00:20:10.771 "state": "completed", 00:20:10.771 "digest": "sha256", 00:20:10.771 "dhgroup": "ffdhe8192" 00:20:10.771 } 00:20:10.771 } 00:20:10.771 ]' 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.771 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.031 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.031 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.031 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.031 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret 
DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.005 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.606 00:20:12.606 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.606 12:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.606 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.866 { 00:20:12.866 "cntlid": 47, 00:20:12.866 "qid": 0, 00:20:12.866 "state": "enabled", 00:20:12.866 "thread": "nvmf_tgt_poll_group_000", 00:20:12.866 "listen_address": { 00:20:12.866 "trtype": "TCP", 00:20:12.866 "adrfam": "IPv4", 00:20:12.866 "traddr": "10.0.0.2", 00:20:12.866 "trsvcid": "4420" 00:20:12.866 }, 00:20:12.866 "peer_address": { 00:20:12.866 "trtype": "TCP", 00:20:12.866 "adrfam": "IPv4", 00:20:12.866 "traddr": "10.0.0.1", 00:20:12.866 "trsvcid": "48052" 00:20:12.866 }, 00:20:12.866 "auth": { 00:20:12.866 "state": "completed", 00:20:12.866 "digest": "sha256", 00:20:12.866 "dhgroup": "ffdhe8192" 00:20:12.866 } 00:20:12.866 } 00:20:12.866 ]' 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.866 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.127 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.127 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.127 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.127 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.068 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.638 00:20:14.638 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.638 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:14.639 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.898 { 00:20:14.898 "cntlid": 49, 00:20:14.898 "qid": 0, 00:20:14.898 "state": "enabled", 00:20:14.898 "thread": "nvmf_tgt_poll_group_000", 00:20:14.898 "listen_address": { 00:20:14.898 "trtype": "TCP", 00:20:14.898 "adrfam": "IPv4", 00:20:14.898 "traddr": "10.0.0.2", 00:20:14.898 "trsvcid": "4420" 00:20:14.898 }, 00:20:14.898 "peer_address": { 00:20:14.898 "trtype": "TCP", 00:20:14.898 "adrfam": "IPv4", 00:20:14.898 "traddr": "10.0.0.1", 00:20:14.898 "trsvcid": "48086" 00:20:14.898 }, 00:20:14.898 "auth": { 00:20:14.898 "state": "completed", 00:20:14.898 "digest": "sha384", 00:20:14.898 "dhgroup": "null" 00:20:14.898 } 00:20:14.898 } 00:20:14.898 ]' 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:14.898 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.158 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.158 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.158 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.158 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:16.099 12:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.099 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.359 00:20:16.359 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.359 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.359 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.618 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.618 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.618 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.618 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.618 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.618 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.618 { 00:20:16.618 "cntlid": 51, 00:20:16.618 "qid": 0, 00:20:16.618 "state": "enabled", 00:20:16.618 "thread": "nvmf_tgt_poll_group_000", 00:20:16.618 "listen_address": { 00:20:16.618 "trtype": "TCP", 00:20:16.618 "adrfam": "IPv4", 00:20:16.618 "traddr": "10.0.0.2", 00:20:16.618 "trsvcid": "4420" 00:20:16.618 }, 00:20:16.618 "peer_address": { 00:20:16.618 "trtype": "TCP", 00:20:16.618 "adrfam": "IPv4", 00:20:16.618 "traddr": "10.0.0.1", 00:20:16.618 "trsvcid": "48100" 00:20:16.618 }, 00:20:16.618 "auth": { 00:20:16.618 "state": "completed", 00:20:16.618 "digest": "sha384", 00:20:16.618 "dhgroup": "null" 00:20:16.618 } 00:20:16.618 } 00:20:16.619 ]' 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.619 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.878 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:17.450 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.450 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:17.450 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.450 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.710 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.710 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.710 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.710 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.710 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.970 00:20:17.970 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.970 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.970 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.230 { 00:20:18.230 "cntlid": 53, 00:20:18.230 "qid": 0, 00:20:18.230 "state": "enabled", 00:20:18.230 "thread": "nvmf_tgt_poll_group_000", 00:20:18.230 "listen_address": { 00:20:18.230 "trtype": "TCP", 00:20:18.230 "adrfam": "IPv4", 00:20:18.230 "traddr": "10.0.0.2", 00:20:18.230 "trsvcid": "4420" 00:20:18.230 }, 00:20:18.230 "peer_address": { 00:20:18.230 "trtype": "TCP", 00:20:18.230 "adrfam": "IPv4", 00:20:18.230 "traddr": "10.0.0.1", 00:20:18.230 "trsvcid": "48134" 00:20:18.230 }, 00:20:18.230 "auth": { 00:20:18.230 "state": "completed", 00:20:18.230 "digest": "sha384", 00:20:18.230 "dhgroup": "null" 00:20:18.230 } 00:20:18.230 } 00:20:18.230 ]' 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.230 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:18.491 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.491 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.491 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.491 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.491 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.433 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.694 00:20:19.694 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.694 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.694 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.955 { 00:20:19.955 "cntlid": 55, 00:20:19.955 "qid": 0, 00:20:19.955 "state": "enabled", 00:20:19.955 "thread": "nvmf_tgt_poll_group_000", 00:20:19.955 "listen_address": { 00:20:19.955 "trtype": "TCP", 00:20:19.955 "adrfam": "IPv4", 00:20:19.955 "traddr": "10.0.0.2", 00:20:19.955 "trsvcid": "4420" 00:20:19.955 }, 00:20:19.955 "peer_address": { 
00:20:19.955 "trtype": "TCP", 00:20:19.955 "adrfam": "IPv4", 00:20:19.955 "traddr": "10.0.0.1", 00:20:19.955 "trsvcid": "58494" 00:20:19.955 }, 00:20:19.955 "auth": { 00:20:19.955 "state": "completed", 00:20:19.955 "digest": "sha384", 00:20:19.955 "dhgroup": "null" 00:20:19.955 } 00:20:19.955 } 00:20:19.955 ]' 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.955 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.215 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.155 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.415 00:20:21.415 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.415 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.415 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.676 { 00:20:21.676 "cntlid": 57, 00:20:21.676 "qid": 0, 00:20:21.676 "state": "enabled", 00:20:21.676 "thread": "nvmf_tgt_poll_group_000", 00:20:21.676 "listen_address": { 00:20:21.676 "trtype": "TCP", 00:20:21.676 "adrfam": "IPv4", 00:20:21.676 "traddr": "10.0.0.2", 00:20:21.676 "trsvcid": "4420" 00:20:21.676 }, 00:20:21.676 "peer_address": { 00:20:21.676 "trtype": "TCP", 00:20:21.676 "adrfam": "IPv4", 00:20:21.676 "traddr": "10.0.0.1", 00:20:21.676 "trsvcid": "58506" 00:20:21.676 }, 00:20:21.676 "auth": { 00:20:21.676 "state": "completed", 00:20:21.676 "digest": "sha384", 00:20:21.676 "dhgroup": "ffdhe2048" 00:20:21.676 } 00:20:21.676 } 00:20:21.676 ]' 
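The entries above repeat one cycle of the connect_authenticate loop per key index; below is a condensed sketch of that cycle, assuming the same RPC socket, subsystem NQN, host UUID and key names that appear in the trace (here digest=sha384, dhgroup=ffdhe2048, keyid=0). The DHHC-1 secrets are abbreviated placeholders, not the generated keys from this run.

# one connect_authenticate iteration, reconstructed from the surrounding trace
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a

# host-side initiator restricted to the digest/dhgroup under test
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# target side: allow this host on the subsystem with the key pair under test
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# attach through the host RPC server, then verify controller name and qpair auth state
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'       # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect "completed"
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

# repeat the handshake with the kernel initiator, then clean up
# ($KEY0/$CKEY0 stand in for the DHHC-1:00:... / DHHC-1:03:... secrets logged above)
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
    --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN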
00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.676 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.936 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:22.505 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.506 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:22.506 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.506 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.506 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.506 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.506 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.506 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.765 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.025 00:20:23.025 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.025 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.025 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.284 { 00:20:23.284 "cntlid": 59, 00:20:23.284 "qid": 0, 00:20:23.284 "state": "enabled", 00:20:23.284 "thread": "nvmf_tgt_poll_group_000", 00:20:23.284 "listen_address": { 00:20:23.284 "trtype": "TCP", 00:20:23.284 "adrfam": "IPv4", 00:20:23.284 "traddr": "10.0.0.2", 00:20:23.284 "trsvcid": "4420" 00:20:23.284 }, 00:20:23.284 "peer_address": { 00:20:23.284 "trtype": "TCP", 00:20:23.284 "adrfam": "IPv4", 00:20:23.284 "traddr": "10.0.0.1", 00:20:23.284 "trsvcid": "58530" 00:20:23.284 }, 00:20:23.284 "auth": { 00:20:23.284 "state": "completed", 00:20:23.284 "digest": "sha384", 00:20:23.284 "dhgroup": "ffdhe2048" 00:20:23.284 } 00:20:23.284 } 00:20:23.284 ]' 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.284 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.544 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.544 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.544 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.544 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.483 
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.483 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.054 00:20:25.054 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.054 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.054 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.316 { 00:20:25.316 "cntlid": 61, 00:20:25.316 "qid": 0, 00:20:25.316 "state": "enabled", 00:20:25.316 "thread": "nvmf_tgt_poll_group_000", 00:20:25.316 "listen_address": { 00:20:25.316 "trtype": "TCP", 00:20:25.316 "adrfam": "IPv4", 00:20:25.316 "traddr": "10.0.0.2", 00:20:25.316 "trsvcid": "4420" 00:20:25.316 }, 00:20:25.316 "peer_address": { 00:20:25.316 "trtype": "TCP", 00:20:25.316 "adrfam": "IPv4", 00:20:25.316 "traddr": "10.0.0.1", 00:20:25.316 "trsvcid": "58540" 00:20:25.316 }, 00:20:25.316 "auth": { 00:20:25.316 "state": "completed", 00:20:25.316 "digest": "sha384", 00:20:25.316 "dhgroup": "ffdhe2048" 00:20:25.316 } 00:20:25.316 } 00:20:25.316 ]' 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.316 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.577 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.577 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.577 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.577 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.837 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.407 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.978 
12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.238 00:20:27.238 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.238 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.238 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.498 { 00:20:27.498 "cntlid": 63, 00:20:27.498 "qid": 0, 00:20:27.498 "state": "enabled", 00:20:27.498 "thread": "nvmf_tgt_poll_group_000", 00:20:27.498 "listen_address": { 00:20:27.498 "trtype": "TCP", 00:20:27.498 "adrfam": "IPv4", 00:20:27.498 "traddr": "10.0.0.2", 00:20:27.498 "trsvcid": "4420" 00:20:27.498 }, 00:20:27.498 "peer_address": { 00:20:27.498 "trtype": "TCP", 00:20:27.498 "adrfam": "IPv4", 00:20:27.498 "traddr": "10.0.0.1", 00:20:27.498 "trsvcid": "58576" 00:20:27.498 }, 00:20:27.498 "auth": { 00:20:27.498 "state": "completed", 00:20:27.498 "digest": "sha384", 00:20:27.498 "dhgroup": "ffdhe2048" 00:20:27.498 } 00:20:27.498 } 00:20:27.498 ]' 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.498 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:27.758 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:20:28.328 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.328 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:28.328 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.328 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.328 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.328 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.589 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.589 12:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.849 00:20:28.849 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.849 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.849 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.109 { 00:20:29.109 "cntlid": 65, 00:20:29.109 "qid": 0, 00:20:29.109 "state": "enabled", 00:20:29.109 "thread": "nvmf_tgt_poll_group_000", 00:20:29.109 "listen_address": { 00:20:29.109 "trtype": "TCP", 00:20:29.109 "adrfam": "IPv4", 00:20:29.109 "traddr": "10.0.0.2", 00:20:29.109 "trsvcid": "4420" 00:20:29.109 }, 00:20:29.109 "peer_address": { 00:20:29.109 "trtype": "TCP", 00:20:29.109 "adrfam": "IPv4", 00:20:29.109 "traddr": "10.0.0.1", 00:20:29.109 "trsvcid": "58606" 00:20:29.109 }, 00:20:29.109 "auth": { 00:20:29.109 "state": "completed", 00:20:29.109 "digest": "sha384", 00:20:29.109 "dhgroup": "ffdhe3072" 00:20:29.109 } 00:20:29.109 } 00:20:29.109 ]' 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.109 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.369 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.369 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.369 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.369 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.369 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.629 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 
80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.206 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.467 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.038 00:20:31.038 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.038 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.038 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.298 { 00:20:31.298 "cntlid": 67, 00:20:31.298 "qid": 0, 00:20:31.298 "state": "enabled", 00:20:31.298 "thread": "nvmf_tgt_poll_group_000", 00:20:31.298 "listen_address": { 00:20:31.298 "trtype": "TCP", 00:20:31.298 "adrfam": "IPv4", 00:20:31.298 "traddr": "10.0.0.2", 00:20:31.298 "trsvcid": "4420" 00:20:31.298 }, 00:20:31.298 "peer_address": { 00:20:31.298 "trtype": "TCP", 00:20:31.298 "adrfam": "IPv4", 00:20:31.298 "traddr": "10.0.0.1", 00:20:31.298 "trsvcid": "59422" 00:20:31.298 }, 00:20:31.298 "auth": { 00:20:31.298 "state": "completed", 00:20:31.298 "digest": "sha384", 00:20:31.298 "dhgroup": "ffdhe3072" 00:20:31.298 } 00:20:31.298 } 00:20:31.298 ]' 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.298 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.299 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.559 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:32.500 12:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.500 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.071 00:20:33.071 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.071 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.071 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.331 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.331 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.331 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.331 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.332 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.332 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.332 { 00:20:33.332 "cntlid": 69, 00:20:33.332 "qid": 0, 00:20:33.332 "state": "enabled", 00:20:33.332 "thread": "nvmf_tgt_poll_group_000", 00:20:33.332 "listen_address": { 00:20:33.332 "trtype": "TCP", 00:20:33.332 "adrfam": "IPv4", 00:20:33.332 "traddr": "10.0.0.2", 00:20:33.332 "trsvcid": "4420" 00:20:33.332 }, 00:20:33.332 "peer_address": { 00:20:33.332 "trtype": "TCP", 00:20:33.332 "adrfam": "IPv4", 00:20:33.332 "traddr": "10.0.0.1", 00:20:33.332 "trsvcid": "59454" 00:20:33.332 }, 00:20:33.332 "auth": { 00:20:33.332 "state": "completed", 00:20:33.332 "digest": "sha384", 00:20:33.332 "dhgroup": "ffdhe3072" 00:20:33.332 } 00:20:33.332 } 00:20:33.332 ]' 00:20:33.332 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.332 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.332 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.332 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.332 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.592 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.592 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.592 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.592 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.532 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.100 00:20:35.100 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.100 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.100 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.359 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.359 12:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.359 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.359 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.359 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.359 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.359 { 00:20:35.359 "cntlid": 71, 00:20:35.359 "qid": 0, 00:20:35.359 "state": "enabled", 00:20:35.359 "thread": "nvmf_tgt_poll_group_000", 00:20:35.360 "listen_address": { 00:20:35.360 "trtype": "TCP", 00:20:35.360 "adrfam": "IPv4", 00:20:35.360 "traddr": "10.0.0.2", 00:20:35.360 "trsvcid": "4420" 00:20:35.360 }, 00:20:35.360 "peer_address": { 00:20:35.360 "trtype": "TCP", 00:20:35.360 "adrfam": "IPv4", 00:20:35.360 "traddr": "10.0.0.1", 00:20:35.360 "trsvcid": "59494" 00:20:35.360 }, 00:20:35.360 "auth": { 00:20:35.360 "state": "completed", 00:20:35.360 "digest": "sha384", 00:20:35.360 "dhgroup": "ffdhe3072" 00:20:35.360 } 00:20:35.360 } 00:20:35.360 ]' 00:20:35.360 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.360 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.360 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.620 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.620 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.620 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.620 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.620 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.620 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.559 12:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.559 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.560 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.560 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.819 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.108 12:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.108 { 00:20:37.108 "cntlid": 73, 00:20:37.108 "qid": 0, 00:20:37.108 "state": "enabled", 00:20:37.108 "thread": "nvmf_tgt_poll_group_000", 00:20:37.108 "listen_address": { 00:20:37.108 "trtype": "TCP", 00:20:37.108 "adrfam": "IPv4", 00:20:37.108 "traddr": "10.0.0.2", 00:20:37.108 "trsvcid": "4420" 00:20:37.108 }, 00:20:37.108 "peer_address": { 00:20:37.108 "trtype": "TCP", 00:20:37.108 "adrfam": "IPv4", 00:20:37.108 "traddr": "10.0.0.1", 00:20:37.108 "trsvcid": "59524" 00:20:37.108 }, 00:20:37.108 "auth": { 00:20:37.108 "state": "completed", 00:20:37.108 "digest": "sha384", 00:20:37.108 "dhgroup": "ffdhe4096" 00:20:37.108 } 00:20:37.108 } 00:20:37.108 ]' 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.108 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.398 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.398 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.398 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.398 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.398 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.398 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.339 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.600 00:20:38.600 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.600 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.600 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:20:38.860 { 00:20:38.860 "cntlid": 75, 00:20:38.860 "qid": 0, 00:20:38.860 "state": "enabled", 00:20:38.860 "thread": "nvmf_tgt_poll_group_000", 00:20:38.860 "listen_address": { 00:20:38.860 "trtype": "TCP", 00:20:38.860 "adrfam": "IPv4", 00:20:38.860 "traddr": "10.0.0.2", 00:20:38.860 "trsvcid": "4420" 00:20:38.860 }, 00:20:38.860 "peer_address": { 00:20:38.860 "trtype": "TCP", 00:20:38.860 "adrfam": "IPv4", 00:20:38.860 "traddr": "10.0.0.1", 00:20:38.860 "trsvcid": "59548" 00:20:38.860 }, 00:20:38.860 "auth": { 00:20:38.860 "state": "completed", 00:20:38.860 "digest": "sha384", 00:20:38.860 "dhgroup": "ffdhe4096" 00:20:38.860 } 00:20:38.860 } 00:20:38.860 ]' 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.860 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.120 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.120 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.120 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.120 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.062 
12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.062 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.322 00:20:40.322 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.322 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.322 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.582 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.582 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.582 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.582 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.582 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.582 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.582 { 00:20:40.582 "cntlid": 77, 00:20:40.582 "qid": 0, 00:20:40.582 "state": "enabled", 00:20:40.582 "thread": "nvmf_tgt_poll_group_000", 00:20:40.582 "listen_address": { 00:20:40.582 "trtype": "TCP", 00:20:40.582 "adrfam": "IPv4", 00:20:40.582 "traddr": "10.0.0.2", 00:20:40.582 "trsvcid": "4420" 00:20:40.582 }, 00:20:40.582 "peer_address": { 
00:20:40.582 "trtype": "TCP", 00:20:40.582 "adrfam": "IPv4", 00:20:40.582 "traddr": "10.0.0.1", 00:20:40.582 "trsvcid": "49970" 00:20:40.582 }, 00:20:40.582 "auth": { 00:20:40.582 "state": "completed", 00:20:40.582 "digest": "sha384", 00:20:40.582 "dhgroup": "ffdhe4096" 00:20:40.582 } 00:20:40.582 } 00:20:40.582 ]' 00:20:40.582 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.841 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.841 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.841 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.841 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.841 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.841 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.841 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.101 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.670 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.930 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.501 00:20:42.501 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.501 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.501 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.760 { 00:20:42.760 "cntlid": 79, 00:20:42.760 "qid": 0, 00:20:42.760 "state": "enabled", 00:20:42.760 "thread": "nvmf_tgt_poll_group_000", 00:20:42.760 "listen_address": { 00:20:42.760 "trtype": "TCP", 00:20:42.760 "adrfam": "IPv4", 00:20:42.760 "traddr": "10.0.0.2", 00:20:42.760 "trsvcid": "4420" 00:20:42.760 }, 00:20:42.760 "peer_address": { 00:20:42.760 "trtype": "TCP", 00:20:42.760 "adrfam": "IPv4", 00:20:42.760 "traddr": "10.0.0.1", 00:20:42.760 "trsvcid": "49990" 00:20:42.760 }, 00:20:42.760 "auth": { 00:20:42.760 "state": "completed", 00:20:42.760 "digest": "sha384", 00:20:42.760 "dhgroup": "ffdhe4096" 00:20:42.760 } 00:20:42.760 } 00:20:42.760 ]' 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.760 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.020 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.020 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.020 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.020 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
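Each iteration then repeats the authentication from the kernel initiator before the host entry is removed again. A minimal sketch of that leg is below, with the DHHC-1 secrets abbreviated; the full values for the key0/ckey0 pair used in the ffdhe6144 iteration appear verbatim in the trace.

    # in-band DH-HMAC-CHAP from the kernel initiator, then tear the mapping down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a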
00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.977 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.547 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.547 { 00:20:44.547 "cntlid": 81, 00:20:44.547 "qid": 0, 00:20:44.547 "state": "enabled", 00:20:44.547 "thread": "nvmf_tgt_poll_group_000", 00:20:44.547 "listen_address": { 00:20:44.547 "trtype": "TCP", 00:20:44.547 "adrfam": "IPv4", 00:20:44.547 "traddr": "10.0.0.2", 00:20:44.547 "trsvcid": "4420" 00:20:44.547 }, 00:20:44.547 "peer_address": { 00:20:44.547 "trtype": "TCP", 00:20:44.547 "adrfam": "IPv4", 00:20:44.547 "traddr": "10.0.0.1", 00:20:44.547 "trsvcid": "50012" 00:20:44.547 }, 00:20:44.547 "auth": { 00:20:44.547 "state": "completed", 00:20:44.547 "digest": "sha384", 00:20:44.547 "dhgroup": "ffdhe6144" 00:20:44.547 } 00:20:44.547 } 00:20:44.547 ]' 00:20:44.547 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.808 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.808 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.808 12:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.808 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.808 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.808 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.808 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.069 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.640 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.901 12:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.901 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.472 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.472 { 00:20:46.472 "cntlid": 83, 00:20:46.472 "qid": 0, 00:20:46.472 "state": "enabled", 00:20:46.472 "thread": "nvmf_tgt_poll_group_000", 00:20:46.472 "listen_address": { 00:20:46.472 "trtype": "TCP", 00:20:46.472 "adrfam": "IPv4", 00:20:46.472 "traddr": "10.0.0.2", 00:20:46.472 "trsvcid": "4420" 00:20:46.472 }, 00:20:46.472 "peer_address": { 00:20:46.472 "trtype": "TCP", 00:20:46.472 "adrfam": "IPv4", 00:20:46.472 "traddr": "10.0.0.1", 00:20:46.472 "trsvcid": "50026" 00:20:46.472 }, 00:20:46.472 "auth": { 00:20:46.472 "state": "completed", 00:20:46.472 "digest": "sha384", 00:20:46.472 "dhgroup": "ffdhe6144" 00:20:46.472 } 00:20:46.472 } 00:20:46.472 ]' 00:20:46.472 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.732 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.732 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.732 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.732 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.732 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.732 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.732 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.993 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.564 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.824 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.824 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.394 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.394 { 00:20:48.394 "cntlid": 85, 00:20:48.394 "qid": 0, 00:20:48.394 "state": "enabled", 00:20:48.394 "thread": "nvmf_tgt_poll_group_000", 00:20:48.394 "listen_address": { 00:20:48.394 "trtype": "TCP", 00:20:48.394 "adrfam": "IPv4", 00:20:48.394 "traddr": "10.0.0.2", 00:20:48.394 "trsvcid": "4420" 00:20:48.394 }, 00:20:48.394 "peer_address": { 00:20:48.394 "trtype": "TCP", 00:20:48.394 "adrfam": "IPv4", 00:20:48.394 "traddr": "10.0.0.1", 00:20:48.394 "trsvcid": "50054" 00:20:48.394 }, 00:20:48.394 "auth": { 00:20:48.394 "state": "completed", 00:20:48.394 "digest": "sha384", 00:20:48.394 "dhgroup": "ffdhe6144" 00:20:48.394 } 00:20:48.394 } 00:20:48.394 ]' 00:20:48.394 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.654 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.654 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.654 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.654 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.654 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.654 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.654 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.915 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.485 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.744 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.744 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.744 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.744 12:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.685 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.685 { 00:20:50.685 "cntlid": 87, 00:20:50.685 "qid": 0, 00:20:50.685 "state": "enabled", 00:20:50.685 "thread": "nvmf_tgt_poll_group_000", 00:20:50.685 "listen_address": { 00:20:50.685 "trtype": "TCP", 00:20:50.685 "adrfam": "IPv4", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "trsvcid": "4420" 00:20:50.685 }, 00:20:50.685 "peer_address": { 00:20:50.685 "trtype": "TCP", 00:20:50.685 "adrfam": "IPv4", 00:20:50.685 "traddr": "10.0.0.1", 00:20:50.685 "trsvcid": "55428" 00:20:50.685 }, 00:20:50.685 "auth": { 00:20:50.685 "state": "completed", 00:20:50.685 "digest": "sha384", 00:20:50.685 "dhgroup": "ffdhe6144" 00:20:50.685 } 00:20:50.685 } 00:20:50.685 ]' 00:20:50.685 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.685 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.685 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.685 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.685 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.945 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.945 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.945 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.945 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a 
--dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:20:51.886 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.886 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:51.886 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.886 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.886 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.887 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.887 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.887 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.887 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.887 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.827 00:20:52.827 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.827 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.827 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.089 { 00:20:53.089 "cntlid": 89, 00:20:53.089 "qid": 0, 00:20:53.089 "state": "enabled", 00:20:53.089 "thread": "nvmf_tgt_poll_group_000", 00:20:53.089 "listen_address": { 00:20:53.089 "trtype": "TCP", 00:20:53.089 "adrfam": "IPv4", 00:20:53.089 "traddr": "10.0.0.2", 00:20:53.089 "trsvcid": "4420" 00:20:53.089 }, 00:20:53.089 "peer_address": { 00:20:53.089 "trtype": "TCP", 00:20:53.089 "adrfam": "IPv4", 00:20:53.089 "traddr": "10.0.0.1", 00:20:53.089 "trsvcid": "55452" 00:20:53.089 }, 00:20:53.089 "auth": { 00:20:53.089 "state": "completed", 00:20:53.089 "digest": "sha384", 00:20:53.089 "dhgroup": "ffdhe8192" 00:20:53.089 } 00:20:53.089 } 00:20:53.089 ]' 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.089 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.350 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.350 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.350 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.350 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.350 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.350 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:20:54.290 12:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.291 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:54.291 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.291 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.291 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.291 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.291 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.291 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.860 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.430 00:20:55.430 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.430 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.430 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.690 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.690 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.690 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.690 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.690 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.690 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.690 { 00:20:55.690 "cntlid": 91, 00:20:55.690 "qid": 0, 00:20:55.690 "state": "enabled", 00:20:55.690 "thread": "nvmf_tgt_poll_group_000", 00:20:55.690 "listen_address": { 00:20:55.690 "trtype": "TCP", 00:20:55.690 "adrfam": "IPv4", 00:20:55.690 "traddr": "10.0.0.2", 00:20:55.690 "trsvcid": "4420" 00:20:55.690 }, 00:20:55.690 "peer_address": { 00:20:55.690 "trtype": "TCP", 00:20:55.690 "adrfam": "IPv4", 00:20:55.690 "traddr": "10.0.0.1", 00:20:55.690 "trsvcid": "55490" 00:20:55.690 }, 00:20:55.690 "auth": { 00:20:55.690 "state": "completed", 00:20:55.690 "digest": "sha384", 00:20:55.690 "dhgroup": "ffdhe8192" 00:20:55.691 } 00:20:55.691 } 00:20:55.691 ]' 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.952 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.522 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.782 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.352 00:20:57.352 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.352 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.352 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.612 { 00:20:57.612 "cntlid": 93, 00:20:57.612 "qid": 0, 00:20:57.612 "state": "enabled", 00:20:57.612 "thread": "nvmf_tgt_poll_group_000", 00:20:57.612 "listen_address": { 00:20:57.612 "trtype": "TCP", 00:20:57.612 "adrfam": "IPv4", 00:20:57.612 "traddr": "10.0.0.2", 00:20:57.612 "trsvcid": "4420" 00:20:57.612 }, 00:20:57.612 "peer_address": { 00:20:57.612 "trtype": "TCP", 00:20:57.612 "adrfam": "IPv4", 00:20:57.612 "traddr": "10.0.0.1", 00:20:57.612 "trsvcid": "55520" 00:20:57.612 }, 00:20:57.612 "auth": { 00:20:57.612 "state": "completed", 00:20:57.612 "digest": "sha384", 00:20:57.612 "dhgroup": "ffdhe8192" 00:20:57.612 } 00:20:57.612 } 00:20:57.612 ]' 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.612 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.612 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.612 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.873 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.873 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.873 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.873 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:20:58.813 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.813 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:58.813 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.813 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.813 12:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.813 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.813 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.813 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.813 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.384 00:20:59.644 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.644 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.644 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.644 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.644 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.644 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.644 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:59.644 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.644 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.644 { 00:20:59.644 "cntlid": 95, 00:20:59.644 "qid": 0, 00:20:59.644 "state": "enabled", 00:20:59.644 "thread": "nvmf_tgt_poll_group_000", 00:20:59.644 "listen_address": { 00:20:59.644 "trtype": "TCP", 00:20:59.644 "adrfam": "IPv4", 00:20:59.644 "traddr": "10.0.0.2", 00:20:59.644 "trsvcid": "4420" 00:20:59.644 }, 00:20:59.644 "peer_address": { 00:20:59.644 "trtype": "TCP", 00:20:59.644 "adrfam": "IPv4", 00:20:59.644 "traddr": "10.0.0.1", 00:20:59.644 "trsvcid": "33712" 00:20:59.644 }, 00:20:59.644 "auth": { 00:20:59.644 "state": "completed", 00:20:59.644 "digest": "sha384", 00:20:59.644 "dhgroup": "ffdhe8192" 00:20:59.644 } 00:20:59.644 } 00:20:59.644 ]' 00:20:59.644 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.904 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.904 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.904 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.904 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.904 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.904 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.904 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.196 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.766 12:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.766 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.026 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.287 00:21:01.287 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.287 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.287 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.547 12:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.547 { 00:21:01.547 "cntlid": 97, 00:21:01.547 "qid": 0, 00:21:01.547 "state": "enabled", 00:21:01.547 "thread": "nvmf_tgt_poll_group_000", 00:21:01.547 "listen_address": { 00:21:01.547 "trtype": "TCP", 00:21:01.547 "adrfam": "IPv4", 00:21:01.547 "traddr": "10.0.0.2", 00:21:01.547 "trsvcid": "4420" 00:21:01.547 }, 00:21:01.547 "peer_address": { 00:21:01.547 "trtype": "TCP", 00:21:01.547 "adrfam": "IPv4", 00:21:01.547 "traddr": "10.0.0.1", 00:21:01.547 "trsvcid": "33724" 00:21:01.547 }, 00:21:01.547 "auth": { 00:21:01.547 "state": "completed", 00:21:01.547 "digest": "sha512", 00:21:01.547 "dhgroup": "null" 00:21:01.547 } 00:21:01.547 } 00:21:01.547 ]' 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.547 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.807 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.391 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.723 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.984 00:21:02.984 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.984 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.984 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.245 { 00:21:03.245 "cntlid": 99, 00:21:03.245 "qid": 0, 00:21:03.245 "state": "enabled", 00:21:03.245 "thread": "nvmf_tgt_poll_group_000", 00:21:03.245 "listen_address": { 00:21:03.245 "trtype": "TCP", 00:21:03.245 "adrfam": "IPv4", 00:21:03.245 
"traddr": "10.0.0.2", 00:21:03.245 "trsvcid": "4420" 00:21:03.245 }, 00:21:03.245 "peer_address": { 00:21:03.245 "trtype": "TCP", 00:21:03.245 "adrfam": "IPv4", 00:21:03.245 "traddr": "10.0.0.1", 00:21:03.245 "trsvcid": "33744" 00:21:03.245 }, 00:21:03.245 "auth": { 00:21:03.245 "state": "completed", 00:21:03.245 "digest": "sha512", 00:21:03.245 "dhgroup": "null" 00:21:03.245 } 00:21:03.245 } 00:21:03.245 ]' 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.245 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.505 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:21:04.077 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.077 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:04.077 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.077 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.338 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.338 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.338 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.338 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.909 12:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.909 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.909 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.169 { 00:21:05.169 "cntlid": 101, 00:21:05.169 "qid": 0, 00:21:05.169 "state": "enabled", 00:21:05.169 "thread": "nvmf_tgt_poll_group_000", 00:21:05.169 "listen_address": { 00:21:05.169 "trtype": "TCP", 00:21:05.169 "adrfam": "IPv4", 00:21:05.169 "traddr": "10.0.0.2", 00:21:05.169 "trsvcid": "4420" 00:21:05.169 }, 00:21:05.169 "peer_address": { 00:21:05.169 "trtype": "TCP", 00:21:05.169 "adrfam": "IPv4", 00:21:05.169 "traddr": "10.0.0.1", 00:21:05.169 "trsvcid": "33776" 00:21:05.169 }, 00:21:05.169 "auth": { 00:21:05.169 "state": "completed", 00:21:05.169 "digest": "sha512", 00:21:05.169 "dhgroup": "null" 
00:21:05.169 } 00:21:05.169 } 00:21:05.169 ]' 00:21:05.169 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.429 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.429 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.429 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:05.429 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.429 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.429 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.429 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.689 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.259 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.519 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.779 00:21:06.779 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.779 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.779 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.039 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.039 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.039 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.039 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.039 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.039 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.039 { 00:21:07.039 "cntlid": 103, 00:21:07.039 "qid": 0, 00:21:07.039 "state": "enabled", 00:21:07.039 "thread": "nvmf_tgt_poll_group_000", 00:21:07.039 "listen_address": { 00:21:07.039 "trtype": "TCP", 00:21:07.039 "adrfam": "IPv4", 00:21:07.039 "traddr": "10.0.0.2", 00:21:07.039 "trsvcid": "4420" 00:21:07.039 }, 00:21:07.039 "peer_address": { 00:21:07.040 "trtype": "TCP", 00:21:07.040 "adrfam": "IPv4", 00:21:07.040 "traddr": "10.0.0.1", 00:21:07.040 "trsvcid": "33806" 00:21:07.040 }, 00:21:07.040 "auth": { 00:21:07.040 "state": "completed", 00:21:07.040 "digest": "sha512", 00:21:07.040 "dhgroup": "null" 00:21:07.040 } 00:21:07.040 } 00:21:07.040 ]' 00:21:07.040 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.040 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.040 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.040 12:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.040 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.299 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.299 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.299 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.300 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.240 12:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.240 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.501 00:21:08.501 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.501 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.501 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.762 { 00:21:08.762 "cntlid": 105, 00:21:08.762 "qid": 0, 00:21:08.762 "state": "enabled", 00:21:08.762 "thread": "nvmf_tgt_poll_group_000", 00:21:08.762 "listen_address": { 00:21:08.762 "trtype": "TCP", 00:21:08.762 "adrfam": "IPv4", 00:21:08.762 "traddr": "10.0.0.2", 00:21:08.762 "trsvcid": "4420" 00:21:08.762 }, 00:21:08.762 "peer_address": { 00:21:08.762 "trtype": "TCP", 00:21:08.762 "adrfam": "IPv4", 00:21:08.762 "traddr": "10.0.0.1", 00:21:08.762 "trsvcid": "33838" 00:21:08.762 }, 00:21:08.762 "auth": { 00:21:08.762 "state": "completed", 00:21:08.762 "digest": "sha512", 00:21:08.762 "dhgroup": "ffdhe2048" 00:21:08.762 } 00:21:08.762 } 00:21:08.762 ]' 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.762 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.022 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.022 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.022 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.022 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.962 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.221 00:21:10.221 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.221 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.221 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.481 { 00:21:10.481 "cntlid": 107, 00:21:10.481 "qid": 0, 00:21:10.481 "state": "enabled", 00:21:10.481 "thread": "nvmf_tgt_poll_group_000", 00:21:10.481 "listen_address": { 00:21:10.481 "trtype": "TCP", 00:21:10.481 "adrfam": "IPv4", 00:21:10.481 "traddr": "10.0.0.2", 00:21:10.481 "trsvcid": "4420" 00:21:10.481 }, 00:21:10.481 "peer_address": { 00:21:10.481 "trtype": "TCP", 00:21:10.481 "adrfam": "IPv4", 00:21:10.481 "traddr": "10.0.0.1", 00:21:10.481 "trsvcid": "35182" 00:21:10.481 }, 00:21:10.481 "auth": { 00:21:10.481 "state": "completed", 00:21:10.481 "digest": "sha512", 00:21:10.481 "dhgroup": "ffdhe2048" 00:21:10.481 } 00:21:10.481 } 00:21:10.481 ]' 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.481 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.742 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.742 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.742 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.742 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
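For reference, every iteration in this log exercises the same DH-HMAC-CHAP cycle. The condensed sketch below is hand-written, not captured test output, and only restates commands that appear verbatim in this run; the full rpc.py path, the long DHHC-1 secrets and the repeated host NQN are shortened here with "..." for readability, and the assumption that the nvmf_subsystem_* calls go to the target's default RPC server (rpc_cmd in the log) while the bdev_nvme_* calls go to the separate host application on /var/tmp/host.sock is inferred from the sockets shown above. The script itself captures the qpair JSON once into $qpairs and runs the three jq checks on it; plain pipes are used below for brevity.

    # host app: allow exactly one digest/dhgroup combination for this pass
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # target: register the host NQN with the DH-CHAP key (and controller key) under test
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-... --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host app: attach a controller with the same keys, then confirm authentication completed
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-... \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'            # expects nvme0
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expects sha512
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expects the dhgroup set above
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expects completed
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # kernel initiator: repeat the handshake with the raw DHHC-1 secrets, then clean up
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-... --hostid 80f8a7aa-... \
        --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-...

The surrounding output then repeats this cycle for each dhgroup (null, ffdhe2048, ffdhe3072, ffdhe4096, ...) and each key index 0-3, which is why the same attach/verify/detach pattern recurs below with only the --dhchap-key and --dhchap-dhgroups arguments changing.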
00:21:11.682 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.942 00:21:11.942 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.942 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.942 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.202 { 00:21:12.202 "cntlid": 109, 00:21:12.202 "qid": 0, 00:21:12.202 "state": "enabled", 00:21:12.202 "thread": "nvmf_tgt_poll_group_000", 00:21:12.202 "listen_address": { 00:21:12.202 "trtype": "TCP", 00:21:12.202 "adrfam": "IPv4", 00:21:12.202 "traddr": "10.0.0.2", 00:21:12.202 "trsvcid": "4420" 00:21:12.202 }, 00:21:12.202 "peer_address": { 00:21:12.202 "trtype": "TCP", 00:21:12.202 "adrfam": "IPv4", 00:21:12.202 "traddr": "10.0.0.1", 00:21:12.202 "trsvcid": "35206" 00:21:12.202 }, 00:21:12.202 "auth": { 00:21:12.202 "state": "completed", 00:21:12.202 "digest": "sha512", 00:21:12.202 "dhgroup": "ffdhe2048" 00:21:12.202 } 00:21:12.202 } 00:21:12.202 ]' 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.202 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.462 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 
--hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.402 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.663 00:21:13.663 12:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.663 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.663 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.924 { 00:21:13.924 "cntlid": 111, 00:21:13.924 "qid": 0, 00:21:13.924 "state": "enabled", 00:21:13.924 "thread": "nvmf_tgt_poll_group_000", 00:21:13.924 "listen_address": { 00:21:13.924 "trtype": "TCP", 00:21:13.924 "adrfam": "IPv4", 00:21:13.924 "traddr": "10.0.0.2", 00:21:13.924 "trsvcid": "4420" 00:21:13.924 }, 00:21:13.924 "peer_address": { 00:21:13.924 "trtype": "TCP", 00:21:13.924 "adrfam": "IPv4", 00:21:13.924 "traddr": "10.0.0.1", 00:21:13.924 "trsvcid": "35232" 00:21:13.924 }, 00:21:13.924 "auth": { 00:21:13.924 "state": "completed", 00:21:13.924 "digest": "sha512", 00:21:13.924 "dhgroup": "ffdhe2048" 00:21:13.924 } 00:21:13.924 } 00:21:13.924 ]' 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.924 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.210 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.151 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.151 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.411 00:21:15.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.411 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.672 { 00:21:15.672 "cntlid": 113, 00:21:15.672 "qid": 0, 00:21:15.672 "state": "enabled", 00:21:15.672 "thread": "nvmf_tgt_poll_group_000", 00:21:15.672 "listen_address": { 00:21:15.672 "trtype": "TCP", 00:21:15.672 "adrfam": "IPv4", 00:21:15.672 "traddr": "10.0.0.2", 00:21:15.672 "trsvcid": "4420" 00:21:15.672 }, 00:21:15.672 "peer_address": { 00:21:15.672 "trtype": "TCP", 00:21:15.672 "adrfam": "IPv4", 00:21:15.672 "traddr": "10.0.0.1", 00:21:15.672 "trsvcid": "35256" 00:21:15.672 }, 00:21:15.672 "auth": { 00:21:15.672 "state": "completed", 00:21:15.672 "digest": "sha512", 00:21:15.672 "dhgroup": "ffdhe3072" 00:21:15.672 } 00:21:15.672 } 00:21:15.672 ]' 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.672 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.672 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.672 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.672 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.672 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.672 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.932 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.502 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.764 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:16.764 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.764 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.765 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.025 00:21:17.025 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.025 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.025 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.286 { 00:21:17.286 "cntlid": 115, 00:21:17.286 "qid": 0, 00:21:17.286 "state": "enabled", 00:21:17.286 "thread": "nvmf_tgt_poll_group_000", 00:21:17.286 "listen_address": { 00:21:17.286 "trtype": "TCP", 00:21:17.286 "adrfam": "IPv4", 00:21:17.286 "traddr": "10.0.0.2", 00:21:17.286 "trsvcid": "4420" 00:21:17.286 }, 00:21:17.286 "peer_address": { 00:21:17.286 "trtype": "TCP", 00:21:17.286 "adrfam": "IPv4", 00:21:17.286 "traddr": "10.0.0.1", 00:21:17.286 "trsvcid": "35264" 00:21:17.286 }, 00:21:17.286 "auth": { 00:21:17.286 "state": "completed", 00:21:17.286 "digest": "sha512", 00:21:17.286 "dhgroup": "ffdhe3072" 00:21:17.286 } 00:21:17.286 } 00:21:17.286 ]' 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.286 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.546 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.547 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.547 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.547 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.547 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.807 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:21:18.378 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.378 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:18.378 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.378 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.378 12:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.378 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.378 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.378 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.638 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.639 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.639 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.639 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.899 00:21:18.899 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.899 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.899 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.160 12:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.160 { 00:21:19.160 "cntlid": 117, 00:21:19.160 "qid": 0, 00:21:19.160 "state": "enabled", 00:21:19.160 "thread": "nvmf_tgt_poll_group_000", 00:21:19.160 "listen_address": { 00:21:19.160 "trtype": "TCP", 00:21:19.160 "adrfam": "IPv4", 00:21:19.160 "traddr": "10.0.0.2", 00:21:19.160 "trsvcid": "4420" 00:21:19.160 }, 00:21:19.160 "peer_address": { 00:21:19.160 "trtype": "TCP", 00:21:19.160 "adrfam": "IPv4", 00:21:19.160 "traddr": "10.0.0.1", 00:21:19.160 "trsvcid": "35286" 00:21:19.160 }, 00:21:19.160 "auth": { 00:21:19.160 "state": "completed", 00:21:19.160 "digest": "sha512", 00:21:19.160 "dhgroup": "ffdhe3072" 00:21:19.160 } 00:21:19.160 } 00:21:19.160 ]' 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.160 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.421 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:21:19.992 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.254 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.514 00:21:20.514 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.514 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.514 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.774 { 00:21:20.774 "cntlid": 119, 00:21:20.774 "qid": 0, 00:21:20.774 "state": "enabled", 00:21:20.774 "thread": 
"nvmf_tgt_poll_group_000", 00:21:20.774 "listen_address": { 00:21:20.774 "trtype": "TCP", 00:21:20.774 "adrfam": "IPv4", 00:21:20.774 "traddr": "10.0.0.2", 00:21:20.774 "trsvcid": "4420" 00:21:20.774 }, 00:21:20.774 "peer_address": { 00:21:20.774 "trtype": "TCP", 00:21:20.774 "adrfam": "IPv4", 00:21:20.774 "traddr": "10.0.0.1", 00:21:20.774 "trsvcid": "43378" 00:21:20.774 }, 00:21:20.774 "auth": { 00:21:20.774 "state": "completed", 00:21:20.774 "digest": "sha512", 00:21:20.774 "dhgroup": "ffdhe3072" 00:21:20.774 } 00:21:20.774 } 00:21:20.774 ]' 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.774 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.034 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.976 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.237 00:21:22.237 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.237 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.237 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.498 { 00:21:22.498 "cntlid": 121, 00:21:22.498 "qid": 0, 00:21:22.498 "state": "enabled", 00:21:22.498 "thread": "nvmf_tgt_poll_group_000", 00:21:22.498 "listen_address": { 00:21:22.498 "trtype": "TCP", 00:21:22.498 "adrfam": "IPv4", 00:21:22.498 "traddr": "10.0.0.2", 00:21:22.498 "trsvcid": "4420" 00:21:22.498 }, 00:21:22.498 "peer_address": { 00:21:22.498 "trtype": "TCP", 00:21:22.498 "adrfam": 
"IPv4", 00:21:22.498 "traddr": "10.0.0.1", 00:21:22.498 "trsvcid": "43416" 00:21:22.498 }, 00:21:22.498 "auth": { 00:21:22.498 "state": "completed", 00:21:22.498 "digest": "sha512", 00:21:22.498 "dhgroup": "ffdhe4096" 00:21:22.498 } 00:21:22.498 } 00:21:22.498 ]' 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.498 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.758 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.758 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.758 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.758 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.758 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.758 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.697 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.697 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:23.697 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.697 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.697 
12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.698 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.957 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.218 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.218 { 00:21:24.218 "cntlid": 123, 00:21:24.218 "qid": 0, 00:21:24.218 "state": "enabled", 00:21:24.218 "thread": "nvmf_tgt_poll_group_000", 00:21:24.218 "listen_address": { 00:21:24.218 "trtype": "TCP", 00:21:24.218 "adrfam": "IPv4", 00:21:24.218 "traddr": "10.0.0.2", 00:21:24.218 "trsvcid": "4420" 00:21:24.218 }, 00:21:24.218 "peer_address": { 00:21:24.218 "trtype": "TCP", 00:21:24.218 "adrfam": "IPv4", 00:21:24.218 "traddr": "10.0.0.1", 00:21:24.218 "trsvcid": "43448" 00:21:24.218 }, 00:21:24.218 "auth": { 00:21:24.218 "state": "completed", 00:21:24.218 "digest": "sha512", 00:21:24.218 "dhgroup": "ffdhe4096" 00:21:24.218 } 00:21:24.218 } 00:21:24.218 ]' 00:21:24.218 12:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.478 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.478 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.478 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.478 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.478 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.478 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.478 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.738 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.308 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.568 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.828 00:21:25.828 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.828 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.828 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.089 { 00:21:26.089 "cntlid": 125, 00:21:26.089 "qid": 0, 00:21:26.089 "state": "enabled", 00:21:26.089 "thread": "nvmf_tgt_poll_group_000", 00:21:26.089 "listen_address": { 00:21:26.089 "trtype": "TCP", 00:21:26.089 "adrfam": "IPv4", 00:21:26.089 "traddr": "10.0.0.2", 00:21:26.089 "trsvcid": "4420" 00:21:26.089 }, 00:21:26.089 "peer_address": { 00:21:26.089 "trtype": "TCP", 00:21:26.089 "adrfam": "IPv4", 00:21:26.089 "traddr": "10.0.0.1", 00:21:26.089 "trsvcid": "43478" 00:21:26.089 }, 00:21:26.089 "auth": { 00:21:26.089 "state": "completed", 00:21:26.089 "digest": "sha512", 00:21:26.089 "dhgroup": "ffdhe4096" 00:21:26.089 } 00:21:26.089 } 00:21:26.089 ]' 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.089 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.349 
12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.349 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.349 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.349 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.349 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.610 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.203 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.469 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.730 00:21:27.730 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.730 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.730 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.991 { 00:21:27.991 "cntlid": 127, 00:21:27.991 "qid": 0, 00:21:27.991 "state": "enabled", 00:21:27.991 "thread": "nvmf_tgt_poll_group_000", 00:21:27.991 "listen_address": { 00:21:27.991 "trtype": "TCP", 00:21:27.991 "adrfam": "IPv4", 00:21:27.991 "traddr": "10.0.0.2", 00:21:27.991 "trsvcid": "4420" 00:21:27.991 }, 00:21:27.991 "peer_address": { 00:21:27.991 "trtype": "TCP", 00:21:27.991 "adrfam": "IPv4", 00:21:27.991 "traddr": "10.0.0.1", 00:21:27.991 "trsvcid": "43502" 00:21:27.991 }, 00:21:27.991 "auth": { 00:21:27.991 "state": "completed", 00:21:27.991 "digest": "sha512", 00:21:27.991 "dhgroup": "ffdhe4096" 00:21:27.991 } 00:21:27.991 } 00:21:27.991 ]' 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.991 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.251 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:28.823 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.083 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.653 00:21:29.653 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.653 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.653 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.913 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.913 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.913 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.913 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.913 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.913 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.913 { 00:21:29.913 "cntlid": 129, 00:21:29.913 "qid": 0, 00:21:29.913 "state": "enabled", 00:21:29.913 "thread": "nvmf_tgt_poll_group_000", 00:21:29.913 "listen_address": { 00:21:29.913 "trtype": "TCP", 00:21:29.913 "adrfam": "IPv4", 00:21:29.913 "traddr": "10.0.0.2", 00:21:29.913 "trsvcid": "4420" 00:21:29.913 }, 00:21:29.913 "peer_address": { 00:21:29.913 "trtype": "TCP", 00:21:29.913 "adrfam": "IPv4", 00:21:29.913 "traddr": "10.0.0.1", 00:21:29.913 "trsvcid": "59424" 00:21:29.913 }, 00:21:29.913 "auth": { 00:21:29.913 "state": "completed", 00:21:29.913 "digest": "sha512", 00:21:29.913 "dhgroup": "ffdhe6144" 00:21:29.913 } 00:21:29.913 } 00:21:29.913 ]' 00:21:29.913 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.914 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.914 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.914 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.914 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.914 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.914 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.914 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.173 
12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:30.742 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.002 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.573 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.573 { 00:21:31.573 "cntlid": 131, 00:21:31.573 "qid": 0, 00:21:31.573 "state": "enabled", 00:21:31.573 "thread": "nvmf_tgt_poll_group_000", 00:21:31.573 "listen_address": { 00:21:31.573 "trtype": "TCP", 00:21:31.573 "adrfam": "IPv4", 00:21:31.573 "traddr": "10.0.0.2", 00:21:31.573 "trsvcid": "4420" 00:21:31.573 }, 00:21:31.573 "peer_address": { 00:21:31.573 "trtype": "TCP", 00:21:31.573 "adrfam": "IPv4", 00:21:31.573 "traddr": "10.0.0.1", 00:21:31.573 "trsvcid": "59446" 00:21:31.573 }, 00:21:31.573 "auth": { 00:21:31.573 "state": "completed", 00:21:31.573 "digest": "sha512", 00:21:31.573 "dhgroup": "ffdhe6144" 00:21:31.573 } 00:21:31.573 } 00:21:31.573 ]' 00:21:31.573 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.834 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.834 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.834 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.834 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.834 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.834 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.834 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.095 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret 
DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.665 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.926 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.185 
00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.445 { 00:21:33.445 "cntlid": 133, 00:21:33.445 "qid": 0, 00:21:33.445 "state": "enabled", 00:21:33.445 "thread": "nvmf_tgt_poll_group_000", 00:21:33.445 "listen_address": { 00:21:33.445 "trtype": "TCP", 00:21:33.445 "adrfam": "IPv4", 00:21:33.445 "traddr": "10.0.0.2", 00:21:33.445 "trsvcid": "4420" 00:21:33.445 }, 00:21:33.445 "peer_address": { 00:21:33.445 "trtype": "TCP", 00:21:33.445 "adrfam": "IPv4", 00:21:33.445 "traddr": "10.0.0.1", 00:21:33.445 "trsvcid": "59472" 00:21:33.445 }, 00:21:33.445 "auth": { 00:21:33.445 "state": "completed", 00:21:33.445 "digest": "sha512", 00:21:33.445 "dhgroup": "ffdhe6144" 00:21:33.445 } 00:21:33.445 } 00:21:33.445 ]' 00:21:33.445 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.705 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.705 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.705 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.705 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.705 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.705 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.705 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.966 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.537 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.537 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.798 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.058 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.317 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.318 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.318 { 00:21:35.318 "cntlid": 135, 00:21:35.318 "qid": 0, 00:21:35.318 "state": "enabled", 00:21:35.318 "thread": "nvmf_tgt_poll_group_000", 00:21:35.318 "listen_address": { 00:21:35.318 "trtype": "TCP", 00:21:35.318 "adrfam": "IPv4", 00:21:35.318 "traddr": "10.0.0.2", 00:21:35.318 "trsvcid": "4420" 00:21:35.318 }, 00:21:35.318 "peer_address": { 00:21:35.318 "trtype": "TCP", 00:21:35.318 "adrfam": "IPv4", 00:21:35.318 "traddr": "10.0.0.1", 00:21:35.318 "trsvcid": "59502" 00:21:35.318 }, 00:21:35.318 "auth": { 00:21:35.318 "state": "completed", 00:21:35.318 "digest": "sha512", 00:21:35.318 "dhgroup": "ffdhe6144" 00:21:35.318 } 00:21:35.318 } 00:21:35.318 ]' 00:21:35.318 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.318 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.318 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.577 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.577 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.577 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.577 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.577 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.838 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.409 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.669 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:36.669 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.669 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.669 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.669 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:36.670 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.670 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.670 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.670 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.670 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.670 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.670 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.241 00:21:37.241 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.241 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.241 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.502 { 00:21:37.502 "cntlid": 137, 00:21:37.502 "qid": 0, 00:21:37.502 "state": "enabled", 00:21:37.502 "thread": "nvmf_tgt_poll_group_000", 00:21:37.502 "listen_address": { 00:21:37.502 "trtype": "TCP", 00:21:37.502 "adrfam": "IPv4", 00:21:37.502 "traddr": "10.0.0.2", 00:21:37.502 "trsvcid": "4420" 00:21:37.502 }, 00:21:37.502 "peer_address": { 00:21:37.502 "trtype": "TCP", 00:21:37.502 "adrfam": "IPv4", 00:21:37.502 "traddr": "10.0.0.1", 00:21:37.502 "trsvcid": "59522" 00:21:37.502 }, 00:21:37.502 "auth": { 00:21:37.502 "state": "completed", 00:21:37.502 "digest": "sha512", 00:21:37.502 "dhgroup": "ffdhe8192" 00:21:37.502 } 00:21:37.502 } 00:21:37.502 ]' 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.502 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.763 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.763 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.763 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.763 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.702 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.271 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.532 { 00:21:39.532 "cntlid": 139, 00:21:39.532 "qid": 0, 00:21:39.532 "state": "enabled", 00:21:39.532 "thread": "nvmf_tgt_poll_group_000", 00:21:39.532 "listen_address": { 00:21:39.532 "trtype": "TCP", 00:21:39.532 "adrfam": "IPv4", 00:21:39.532 "traddr": "10.0.0.2", 00:21:39.532 "trsvcid": "4420" 00:21:39.532 }, 00:21:39.532 "peer_address": { 00:21:39.532 "trtype": "TCP", 00:21:39.532 "adrfam": "IPv4", 00:21:39.532 "traddr": "10.0.0.1", 00:21:39.532 "trsvcid": "59566" 00:21:39.532 }, 00:21:39.532 "auth": { 00:21:39.532 "state": "completed", 00:21:39.532 "digest": "sha512", 00:21:39.532 "dhgroup": "ffdhe8192" 00:21:39.532 } 00:21:39.532 } 00:21:39.532 ]' 00:21:39.532 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.792 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.792 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.792 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.792 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.792 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.792 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.792 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.052 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:01:NjM2OWMzOTk4NTYyMTJiY2U1NTk2YWU1YjFhZjQ2ZDgDqmFf: --dhchap-ctrl-secret DHHC-1:02:MzM4NGMxNjNiMzBlODNiNGVjYzVhYmVlYWMzNGQ2M2IzOGUxOTY1NGY2ZGEyNGVh5erBjw==: 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.620 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.880 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.449 00:21:41.450 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.450 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.450 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.709 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.709 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.709 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.709 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.709 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.709 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.709 { 00:21:41.709 "cntlid": 141, 00:21:41.709 "qid": 0, 00:21:41.709 "state": "enabled", 00:21:41.709 "thread": "nvmf_tgt_poll_group_000", 00:21:41.709 "listen_address": 
{ 00:21:41.709 "trtype": "TCP", 00:21:41.709 "adrfam": "IPv4", 00:21:41.709 "traddr": "10.0.0.2", 00:21:41.709 "trsvcid": "4420" 00:21:41.709 }, 00:21:41.709 "peer_address": { 00:21:41.709 "trtype": "TCP", 00:21:41.709 "adrfam": "IPv4", 00:21:41.709 "traddr": "10.0.0.1", 00:21:41.709 "trsvcid": "48412" 00:21:41.709 }, 00:21:41.709 "auth": { 00:21:41.709 "state": "completed", 00:21:41.709 "digest": "sha512", 00:21:41.709 "dhgroup": "ffdhe8192" 00:21:41.710 } 00:21:41.710 } 00:21:41.710 ]' 00:21:41.710 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.710 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.710 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.968 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.968 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.969 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.969 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.969 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.227 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:02:MjI0N2I1YTgyM2Y3ZDBmMDA4NTU1ZmMyMmVhOTUzNTcyNDk3ZGVlODc0MWFjMTBjixyOfg==: --dhchap-ctrl-secret DHHC-1:01:OGViMTEwMjAxNzc4ZGY0YTlmNDRlMmQ1NGNjMDE3MmNwlRwW: 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.796 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.056 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.625 00:21:43.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.885 { 00:21:43.885 "cntlid": 143, 00:21:43.885 "qid": 0, 00:21:43.885 "state": "enabled", 00:21:43.885 "thread": "nvmf_tgt_poll_group_000", 00:21:43.885 "listen_address": { 00:21:43.885 "trtype": "TCP", 00:21:43.885 "adrfam": "IPv4", 00:21:43.885 "traddr": "10.0.0.2", 00:21:43.885 "trsvcid": "4420" 00:21:43.885 }, 00:21:43.885 "peer_address": { 00:21:43.885 "trtype": "TCP", 00:21:43.885 "adrfam": "IPv4", 00:21:43.885 "traddr": "10.0.0.1", 00:21:43.885 "trsvcid": "48428" 00:21:43.885 }, 00:21:43.885 "auth": { 00:21:43.885 "state": "completed", 00:21:43.885 "digest": "sha512", 00:21:43.885 "dhgroup": 
"ffdhe8192" 00:21:43.885 } 00:21:43.885 } 00:21:43.885 ]' 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.885 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.144 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.145 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.145 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.145 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.083 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.022 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.022 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.022 { 00:21:46.022 "cntlid": 145, 00:21:46.022 "qid": 0, 00:21:46.022 "state": "enabled", 00:21:46.022 "thread": "nvmf_tgt_poll_group_000", 00:21:46.022 "listen_address": { 00:21:46.022 "trtype": "TCP", 00:21:46.022 "adrfam": "IPv4", 00:21:46.022 "traddr": "10.0.0.2", 00:21:46.022 "trsvcid": "4420" 00:21:46.022 }, 00:21:46.022 "peer_address": { 00:21:46.023 "trtype": "TCP", 00:21:46.023 "adrfam": "IPv4", 00:21:46.023 "traddr": "10.0.0.1", 00:21:46.023 "trsvcid": "48462" 00:21:46.023 }, 00:21:46.023 "auth": { 00:21:46.023 
"state": "completed", 00:21:46.023 "digest": "sha512", 00:21:46.023 "dhgroup": "ffdhe8192" 00:21:46.023 } 00:21:46.023 } 00:21:46.023 ]' 00:21:46.023 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.023 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.023 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.023 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.023 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.282 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.282 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.282 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.282 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:00:YWVkZjIyYjhhYjU2ODJjNTcyYTg5ZmE4NGNkNWNlN2ZhZDU3NGRlMjAzYzIwZDgxLSA8ig==: --dhchap-ctrl-secret DHHC-1:03:YmEwNmI2Njc1MjFmNjk2MGU2YTFhZDMzOTQwYzg2ZTA3ZWVlODg1MzJjNjYwZWE5ZWFkNWI1OWE4Y2I0NGIyZlnyLH8=: 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:47.221 12:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:47.221 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:47.791 request: 00:21:47.791 { 00:21:47.791 "name": "nvme0", 00:21:47.791 "trtype": "tcp", 00:21:47.791 "traddr": "10.0.0.2", 00:21:47.791 "adrfam": "ipv4", 00:21:47.791 "trsvcid": "4420", 00:21:47.791 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:47.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:47.791 "prchk_reftag": false, 00:21:47.791 "prchk_guard": false, 00:21:47.791 "hdgst": false, 00:21:47.791 "ddgst": false, 00:21:47.791 "dhchap_key": "key2", 00:21:47.791 "method": "bdev_nvme_attach_controller", 00:21:47.791 "req_id": 1 00:21:47.791 } 00:21:47.791 Got JSON-RPC error response 00:21:47.791 response: 00:21:47.791 { 00:21:47.791 "code": -5, 00:21:47.791 "message": "Input/output error" 00:21:47.791 } 00:21:47.791 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:47.791 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:47.791 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:47.791 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:47.791 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:47.791 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.791 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.791 
12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.791 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:48.361 request: 00:21:48.361 { 00:21:48.361 "name": "nvme0", 00:21:48.361 "trtype": "tcp", 00:21:48.361 "traddr": "10.0.0.2", 00:21:48.361 "adrfam": "ipv4", 00:21:48.361 "trsvcid": "4420", 00:21:48.361 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:48.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:48.361 "prchk_reftag": false, 00:21:48.361 "prchk_guard": false, 00:21:48.361 "hdgst": false, 00:21:48.362 "ddgst": false, 00:21:48.362 "dhchap_key": "key1", 00:21:48.362 "dhchap_ctrlr_key": "ckey2", 00:21:48.362 "method": "bdev_nvme_attach_controller", 00:21:48.362 "req_id": 1 00:21:48.362 } 00:21:48.362 Got JSON-RPC error response 00:21:48.362 response: 00:21:48.362 { 00:21:48.362 "code": -5, 00:21:48.362 "message": "Input/output error" 00:21:48.362 } 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:48.362 12:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.362 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.932 request: 00:21:48.932 { 00:21:48.932 "name": "nvme0", 00:21:48.932 "trtype": "tcp", 00:21:48.932 "traddr": "10.0.0.2", 00:21:48.932 "adrfam": "ipv4", 00:21:48.932 "trsvcid": "4420", 00:21:48.932 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:48.932 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:48.932 "prchk_reftag": false, 00:21:48.932 "prchk_guard": false, 00:21:48.932 "hdgst": false, 00:21:48.932 "ddgst": false, 00:21:48.932 "dhchap_key": "key1", 00:21:48.932 "dhchap_ctrlr_key": "ckey1", 00:21:48.932 "method": "bdev_nvme_attach_controller", 00:21:48.932 "req_id": 1 00:21:48.932 } 00:21:48.932 Got JSON-RPC error response 00:21:48.932 response: 00:21:48.932 { 00:21:48.932 "code": -5, 00:21:48.932 "message": "Input/output error" 00:21:48.932 } 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 425650 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 425650 ']' 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 425650 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.932 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 425650 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 425650' 00:21:49.192 killing process with pid 425650 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 425650 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 425650 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=451250 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 451250 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 451250 ']' 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.192 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 451250 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 451250 ']' 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.132 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.391 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.391 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:50.391 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:50.391 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.391 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.391 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.391 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.392 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.332 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.332 { 00:21:51.332 "cntlid": 1, 00:21:51.332 "qid": 0, 00:21:51.332 "state": "enabled", 00:21:51.332 "thread": "nvmf_tgt_poll_group_000", 00:21:51.332 "listen_address": { 00:21:51.332 "trtype": "TCP", 00:21:51.332 "adrfam": "IPv4", 00:21:51.332 "traddr": "10.0.0.2", 00:21:51.332 "trsvcid": "4420" 00:21:51.332 }, 00:21:51.332 "peer_address": { 00:21:51.332 "trtype": "TCP", 00:21:51.332 "adrfam": "IPv4", 00:21:51.332 "traddr": "10.0.0.1", 00:21:51.332 "trsvcid": "56604" 00:21:51.332 }, 00:21:51.332 "auth": { 00:21:51.332 "state": "completed", 00:21:51.332 "digest": "sha512", 00:21:51.332 "dhgroup": "ffdhe8192" 00:21:51.332 } 00:21:51.332 } 00:21:51.332 ]' 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.332 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.592 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.592 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.592 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.592 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.592 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.852 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-secret DHHC-1:03:NWIwM2M0ZjJlYjQyMmI4Nzg2NWEyM2NlOWJjOTYxYjBiYmFkYWUyMzFiMzNjZjEwNDg2ZjQwNzcyZmVmMjNjYmX1XRo=: 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.459 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.460 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:52.460 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.726 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.987 request: 00:21:52.987 { 00:21:52.987 "name": "nvme0", 00:21:52.987 "trtype": "tcp", 00:21:52.987 "traddr": "10.0.0.2", 00:21:52.987 "adrfam": "ipv4", 00:21:52.987 "trsvcid": "4420", 00:21:52.987 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:52.987 "prchk_reftag": false, 00:21:52.987 "prchk_guard": false, 00:21:52.987 "hdgst": false, 00:21:52.987 "ddgst": false, 00:21:52.987 "dhchap_key": "key3", 00:21:52.987 "method": "bdev_nvme_attach_controller", 00:21:52.987 "req_id": 1 00:21:52.987 } 00:21:52.987 Got JSON-RPC error response 00:21:52.987 response: 00:21:52.987 { 00:21:52.987 "code": -5, 00:21:52.987 "message": "Input/output error" 00:21:52.987 } 00:21:52.987 12:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.987 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.247 request: 00:21:53.247 { 00:21:53.247 "name": "nvme0", 00:21:53.247 "trtype": "tcp", 00:21:53.247 "traddr": "10.0.0.2", 00:21:53.247 "adrfam": "ipv4", 00:21:53.247 "trsvcid": "4420", 00:21:53.247 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:53.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:53.247 "prchk_reftag": false, 00:21:53.247 "prchk_guard": false, 00:21:53.247 "hdgst": false, 00:21:53.247 "ddgst": false, 00:21:53.247 "dhchap_key": "key3", 00:21:53.247 
"method": "bdev_nvme_attach_controller", 00:21:53.247 "req_id": 1 00:21:53.247 } 00:21:53.247 Got JSON-RPC error response 00:21:53.247 response: 00:21:53.247 { 00:21:53.247 "code": -5, 00:21:53.247 "message": "Input/output error" 00:21:53.247 } 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.247 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.507 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.766 request: 00:21:53.766 { 00:21:53.766 "name": "nvme0", 00:21:53.766 "trtype": "tcp", 00:21:53.766 "traddr": "10.0.0.2", 00:21:53.766 "adrfam": "ipv4", 00:21:53.766 "trsvcid": "4420", 00:21:53.766 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:53.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:21:53.766 "prchk_reftag": false, 00:21:53.766 "prchk_guard": false, 00:21:53.766 "hdgst": false, 00:21:53.766 "ddgst": false, 00:21:53.766 "dhchap_key": "key0", 00:21:53.766 "dhchap_ctrlr_key": "key1", 00:21:53.766 "method": "bdev_nvme_attach_controller", 00:21:53.766 "req_id": 1 00:21:53.766 } 00:21:53.766 Got JSON-RPC error response 00:21:53.766 response: 00:21:53.766 { 00:21:53.766 "code": -5, 00:21:53.766 "message": "Input/output error" 00:21:53.766 } 00:21:53.766 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:53.766 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.766 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.766 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.766 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:53.766 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:54.026 00:21:54.026 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:54.026 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.026 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 425699 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 425699 ']' 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 425699 00:21:54.286 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:54.546 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.546 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 425699 00:21:54.546 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.546 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.546 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 425699' 00:21:54.546 killing process with pid 425699 00:21:54.546 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 425699 00:21:54.546 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 425699 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.806 rmmod nvme_tcp 00:21:54.806 rmmod nvme_fabrics 00:21:54.806 rmmod nvme_keyring 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@489 -- # '[' -n 451250 ']' 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 451250 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 451250 ']' 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 451250 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 451250 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 451250' 00:21:54.806 killing process with pid 451250 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 451250 00:21:54.806 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 451250 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.068 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.WSU /tmp/spdk.key-sha256.KF8 /tmp/spdk.key-sha384.dxp /tmp/spdk.key-sha512.r81 /tmp/spdk.key-sha512.ZL3 /tmp/spdk.key-sha384.IrM /tmp/spdk.key-sha256.jzI '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:56.981 00:21:56.981 real 2m41.896s 00:21:56.981 user 6m8.231s 00:21:56.981 sys 0m22.458s 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.981 ************************************ 00:21:56.981 END TEST nvmf_auth_target 00:21:56.981 ************************************ 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:56.981 12:34:30 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.981 12:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.242 ************************************ 00:21:57.242 START TEST nvmf_bdevio_no_huge 00:21:57.242 ************************************ 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:57.242 * Looking for test storage... 00:21:57.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.242 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 
00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.385 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.385 
12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:05.385 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
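The discovery loop above resolves each detected E810 PCI function to its kernel net device simply by listing the sysfs net/ directory under the PCI address and stripping the path, which is how 0000:4b:00.0 maps to cvl_0_0. A minimal stand-alone sketch of that lookup (PCI addresses taken from this run; the same pattern applies to any NIC with a bound net driver):

    # Resolve PCI functions to net device names via sysfs, as the autotest common.sh does
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue      # no net driver bound to this function
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done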
00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.385 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.385 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.386 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.386 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
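The nvmf_tcp_init sequence above turns the two E810 ports into a small back-to-back test topology: cvl_0_0 is moved into a fresh network namespace and becomes the target-side interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule opens the NVMe/TCP listener port. A condensed sketch of the same setup, using the interface names reported in this run; the ping checks that follow in the log verify the resulting path:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"                                        # target runs inside this namespace
    ip link set cvl_0_0 netns "$NS"                           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in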
00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:22:05.647 00:22:05.647 --- 10.0.0.2 ping statistics --- 00:22:05.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.647 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:22:05.647 00:22:05.647 --- 10.0.0.1 ping statistics --- 00:22:05.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.647 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=456446 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 456446 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 456446 ']' 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:05.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.647 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:05.647 [2024-07-25 12:34:38.968266] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:22:05.647 [2024-07-25 12:34:38.968333] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:05.908 [2024-07-25 12:34:39.092415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.168 [2024-07-25 12:34:39.356683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.168 [2024-07-25 12:34:39.356767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.168 [2024-07-25 12:34:39.356796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.168 [2024-07-25 12:34:39.356819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.168 [2024-07-25 12:34:39.356838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.168 [2024-07-25 12:34:39.357023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.168 [2024-07-25 12:34:39.357177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:06.168 [2024-07-25 12:34:39.357332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:06.168 [2024-07-25 12:34:39.357338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.168 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.168 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:06.168 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.168 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.168 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.429 [2024-07-25 12:34:39.600294] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.429 Malloc0 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.429 [2024-07-25 12:34:39.663884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:06.429 { 00:22:06.429 "params": { 00:22:06.429 "name": "Nvme$subsystem", 00:22:06.429 "trtype": "$TEST_TRANSPORT", 00:22:06.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.429 "adrfam": "ipv4", 00:22:06.429 "trsvcid": "$NVMF_PORT", 00:22:06.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.429 "hdgst": ${hdgst:-false}, 00:22:06.429 "ddgst": ${ddgst:-false} 00:22:06.429 }, 00:22:06.429 "method": "bdev_nvme_attach_controller" 00:22:06.429 } 00:22:06.429 EOF 00:22:06.429 )") 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@556 -- # jq . 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:06.429 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:06.429 "params": { 00:22:06.429 "name": "Nvme1", 00:22:06.429 "trtype": "tcp", 00:22:06.429 "traddr": "10.0.0.2", 00:22:06.429 "adrfam": "ipv4", 00:22:06.429 "trsvcid": "4420", 00:22:06.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.429 "hdgst": false, 00:22:06.429 "ddgst": false 00:22:06.429 }, 00:22:06.429 "method": "bdev_nvme_attach_controller" 00:22:06.429 }' 00:22:06.429 [2024-07-25 12:34:39.719220] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:22:06.429 [2024-07-25 12:34:39.719289] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid456730 ] 00:22:06.429 [2024-07-25 12:34:39.809570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:06.691 [2024-07-25 12:34:39.910396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.691 [2024-07-25 12:34:39.910562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.691 [2024-07-25 12:34:39.910568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.951 I/O targets: 00:22:06.951 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:06.951 00:22:06.951 00:22:06.951 CUnit - A unit testing framework for C - Version 2.1-3 00:22:06.951 http://cunit.sourceforge.net/ 00:22:06.951 00:22:06.951 00:22:06.951 Suite: bdevio tests on: Nvme1n1 00:22:06.951 Test: blockdev write read block ...passed 00:22:06.951 Test: blockdev write zeroes read block ...passed 00:22:06.951 Test: blockdev write zeroes read no split ...passed 00:22:06.951 Test: blockdev write zeroes read split ...passed 00:22:06.951 Test: blockdev write zeroes read split partial ...passed 00:22:06.951 Test: blockdev reset ...[2024-07-25 12:34:40.327885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.951 [2024-07-25 12:34:40.327991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b9970 (9): Bad file descriptor 00:22:07.212 [2024-07-25 12:34:40.384858] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
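The bdevio run above receives its bdev configuration as a JSON document built on the fly by gen_nvmf_target_json and handed over on /dev/fd/62, apparently via process substitution. The trace only echoes the per-controller entry, not the outer wrapper, so the layout below is a reconstruction that assumes the standard SPDK JSON-config structure; the params block is copied verbatim from the printf shown above:

    # Roughly what bdevio --json /dev/fd/62 receives in this run (outer wrapper reconstructed)
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    ./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024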
00:22:07.212 passed 00:22:07.212 Test: blockdev write read 8 blocks ...passed 00:22:07.212 Test: blockdev write read size > 128k ...passed 00:22:07.212 Test: blockdev write read invalid size ...passed 00:22:07.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:07.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:07.212 Test: blockdev write read max offset ...passed 00:22:07.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:07.212 Test: blockdev writev readv 8 blocks ...passed 00:22:07.212 Test: blockdev writev readv 30 x 1block ...passed 00:22:07.473 Test: blockdev writev readv block ...passed 00:22:07.473 Test: blockdev writev readv size > 128k ...passed 00:22:07.473 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:07.473 Test: blockdev comparev and writev ...[2024-07-25 12:34:40.651012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.473 [2024-07-25 12:34:40.651099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:07.473 [2024-07-25 12:34:40.651145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.473 [2024-07-25 12:34:40.651169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:07.473 [2024-07-25 12:34:40.651971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.473 [2024-07-25 12:34:40.652008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:07.473 [2024-07-25 12:34:40.652049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.473 [2024-07-25 12:34:40.652071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:07.473 [2024-07-25 12:34:40.652863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.473 [2024-07-25 12:34:40.652897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:07.473 [2024-07-25 12:34:40.652937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.473 [2024-07-25 12:34:40.652959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:07.474 [2024-07-25 12:34:40.653700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.474 [2024-07-25 12:34:40.653734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:07.474 [2024-07-25 12:34:40.653773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.474 [2024-07-25 12:34:40.653795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:07.474 passed 00:22:07.474 Test: blockdev nvme passthru rw ...passed 00:22:07.474 Test: blockdev nvme passthru vendor specific ...[2024-07-25 12:34:40.737331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.474 [2024-07-25 12:34:40.737371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:07.474 [2024-07-25 12:34:40.737790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.474 [2024-07-25 12:34:40.737823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:07.474 [2024-07-25 12:34:40.738239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.474 [2024-07-25 12:34:40.738271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:07.474 [2024-07-25 12:34:40.738664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.474 [2024-07-25 12:34:40.738697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:07.474 passed 00:22:07.474 Test: blockdev nvme admin passthru ...passed 00:22:07.474 Test: blockdev copy ...passed 00:22:07.474 00:22:07.474 Run Summary: Type Total Ran Passed Failed Inactive 00:22:07.474 suites 1 1 n/a 0 0 00:22:07.474 tests 23 23 23 0 0 00:22:07.474 asserts 152 152 152 0 n/a 00:22:07.474 00:22:07.474 Elapsed time = 1.356 seconds 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:07.735 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.735 rmmod nvme_tcp 00:22:07.995 rmmod nvme_fabrics 00:22:07.995 rmmod nvme_keyring 00:22:07.995 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:07.995 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:22:07.995 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 456446 ']' 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 456446 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 456446 ']' 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 456446 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456446 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456446' 00:22:07.996 killing process with pid 456446 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 456446 00:22:07.996 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 456446 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.936 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.851 00:22:10.851 real 0m13.700s 00:22:10.851 user 0m14.377s 00:22:10.851 sys 0m7.764s 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 ************************************ 00:22:10.851 END TEST nvmf_bdevio_no_huge 00:22:10.851 ************************************ 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh 
--transport=tcp 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 ************************************ 00:22:10.851 START TEST nvmf_tls 00:22:10.851 ************************************ 00:22:10.851 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:11.112 * Looking for test storage... 00:22:11.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:11.112 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.112 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:11.112 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.112 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.112 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
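The tls.sh run above sources test/nvmf/common.sh, which fixes the NVMe/TCP listener ports (4420-4422) and derives the per-run host identity with nvme-cli before exporting the toolchain PATH. A minimal sketch of that identity step, assuming nvme-cli is installed; the uuid in this run is simply whatever gen-hostnqn produced on the node:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # uuid-based NQN, e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # in this run the host ID is the uuid portion of that NQN
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID subnqn=$NVME_SUBNQN"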
00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.113 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:19.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:19.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:19.268 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:19.268 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.268 12:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.268 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:22:19.529 00:22:19.529 --- 10.0.0.2 ping statistics --- 00:22:19.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.529 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:22:19.529 00:22:19.529 --- 10.0.0.1 ping statistics --- 00:22:19.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.529 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=461408 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 461408 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 461408 ']' 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.529 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.529 [2024-07-25 12:34:52.853346] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
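Before the target app comes up, the trace wires the two E810 (ice) ports into a split topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and both directions are ping-verified. A hedged sketch of the same wiring, with IF_TGT/IF_INI standing in for cvl_0_0/cvl_0_1:

  ip netns add cvl_0_0_ns_spdk
  ip link set "$IF_TGT" netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev "$IF_INI"
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev "$IF_TGT"
  ip link set "$IF_INI" up
  ip netns exec cvl_0_0_ns_spdk ip link set "$IF_TGT" up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator-side port
  ping -c 1 10.0.0.2                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace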
00:22:19.529 [2024-07-25 12:34:52.853407] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.529 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.529 [2024-07-25 12:34:52.943954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.788 [2024-07-25 12:34:53.051517] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.788 [2024-07-25 12:34:53.051586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.788 [2024-07-25 12:34:53.051597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.788 [2024-07-25 12:34:53.051607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.788 [2024-07-25 12:34:53.051615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.788 [2024-07-25 12:34:53.051652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:20.358 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:20.619 true 00:22:20.619 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.619 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:20.879 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:20.879 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:20.879 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:21.140 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.140 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:21.401 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:21.401 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:21.401 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:22:21.401 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.401 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:21.660 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:21.660 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:21.660 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.660 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:21.920 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:21.920 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:21.920 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:22.180 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.180 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:22.440 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:22.440 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:22.440 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:22.440 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.440 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:22.701 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.RKLlxSiFGD 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.mcSdMJIagv 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RKLlxSiFGD 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.mcSdMJIagv 00:22:22.961 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:23.222 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:23.482 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RKLlxSiFGD 00:22:23.482 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RKLlxSiFGD 00:22:23.482 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:23.482 [2024-07-25 12:34:56.875240] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.482 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.743 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:24.004 [2024-07-25 12:34:57.304346] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:24.004 [2024-07-25 12:34:57.304690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.004 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:24.355 malloc0 00:22:24.355 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:24.356 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RKLlxSiFGD 00:22:24.616 [2024-07-25 12:34:57.929165] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:24.616 12:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RKLlxSiFGD 00:22:24.616 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.850 Initializing NVMe Controllers 00:22:36.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.850 Initialization complete. Launching workers. 00:22:36.850 ======================================================== 00:22:36.850 Latency(us) 00:22:36.850 Device Information : IOPS MiB/s Average min max 00:22:36.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9434.70 36.85 6785.25 997.45 7497.65 00:22:36.850 ======================================================== 00:22:36.850 Total : 9434.70 36.85 6785.25 997.45 7497.65 00:22:36.850 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RKLlxSiFGD 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RKLlxSiFGD' 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=464007 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 464007 /var/tmp/bdevperf.sock 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 464007 ']' 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.850 [2024-07-25 12:35:08.137845] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:22:36.850 [2024-07-25 12:35:08.137901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464007 ] 00:22:36.850 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.850 [2024-07-25 12:35:08.266381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.850 [2024-07-25 12:35:08.429344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:36.850 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RKLlxSiFGD 00:22:36.850 [2024-07-25 12:35:09.186425] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.850 [2024-07-25 12:35:09.186618] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:36.850 TLSTESTn1 00:22:36.850 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:36.850 Running I/O for 10 seconds... 
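Everything needed to bring the target up with TLS is traced between target/tls.sh@118 and @58 above: two PSKs are written in interchange format to files under /tmp, the ssl socket implementation is pinned to TLS 1.3, and then the transport, subsystem, TLS listener, malloc namespace and host-to-PSK mapping are created over JSON-RPC. A condensed sketch of that sequence, assuming an nvmf_tgt started with --wait-for-rpc and the SPDK tree at the path used in this run (the bdevperf numbers for the attach above follow just below):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # key helper comes from the harness; it emits NVMeTLSkey-1:01:<base64(key||CRC-32)>:
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
  key=$(format_interchange_psk 00112233445566778899aabbccddeeff 1)   # second arg selects the 01 hash tag
  key_path=$(mktemp)
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"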
00:22:46.852 00:22:46.852 Latency(us) 00:22:46.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.852 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:46.852 Verification LBA range: start 0x0 length 0x2000 00:22:46.853 TLSTESTn1 : 10.04 2003.42 7.83 0.00 0.00 63716.94 16535.24 66544.25 00:22:46.853 =================================================================================================================== 00:22:46.853 Total : 2003.42 7.83 0.00 0.00 63716.94 16535.24 66544.25 00:22:46.853 0 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 464007 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 464007 ']' 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 464007 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 464007 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 464007' 00:22:46.853 killing process with pid 464007 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 464007 00:22:46.853 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.853 00:22:46.853 Latency(us) 00:22:46.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.853 =================================================================================================================== 00:22:46.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.853 [2024-07-25 12:35:19.562327] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 464007 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mcSdMJIagv 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mcSdMJIagv 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.853 
12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mcSdMJIagv 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mcSdMJIagv' 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=465868 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 465868 /var/tmp/bdevperf.sock 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 465868 ']' 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.853 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.853 [2024-07-25 12:35:19.913904] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:22:46.853 [2024-07-25 12:35:19.913980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465868 ] 00:22:46.853 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.853 [2024-07-25 12:35:20.046670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.853 [2024-07-25 12:35:20.211304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.424 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.424 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:47.424 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mcSdMJIagv 00:22:47.683 [2024-07-25 12:35:20.988243] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.683 [2024-07-25 12:35:20.988419] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:47.683 [2024-07-25 12:35:20.997722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:47.683 [2024-07-25 12:35:20.997845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b6830 (107): Transport endpoint is not connected 00:22:47.683 [2024-07-25 12:35:20.998820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b6830 (9): Bad file descriptor 00:22:47.683 [2024-07-25 12:35:20.999819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:47.683 [2024-07-25 12:35:20.999849] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:47.683 [2024-07-25 12:35:20.999878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
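The failure just traced is the expected outcome of the @146 case: cnode1's listener was registered with the key in /tmp/tmp.RKLlxSiFGD, so an attach presenting /tmp/tmp.mcSdMJIagv has to be rejected, and the NOT wrapper counts the non-zero exit as a pass (the JSON-RPC error it produced is dumped next). The same assertion without the wrapper, as a hedged sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # expect failure: wrong PSK for the registered host1 <-> cnode1 mapping
  if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.mcSdMJIagv; then
      echo "unexpected: attach succeeded with the wrong PSK" >&2
      exit 1
  fi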
00:22:47.683 request: 00:22:47.683 { 00:22:47.683 "name": "TLSTEST", 00:22:47.683 "trtype": "tcp", 00:22:47.683 "traddr": "10.0.0.2", 00:22:47.683 "adrfam": "ipv4", 00:22:47.683 "trsvcid": "4420", 00:22:47.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.683 "prchk_reftag": false, 00:22:47.683 "prchk_guard": false, 00:22:47.683 "hdgst": false, 00:22:47.683 "ddgst": false, 00:22:47.683 "psk": "/tmp/tmp.mcSdMJIagv", 00:22:47.683 "method": "bdev_nvme_attach_controller", 00:22:47.683 "req_id": 1 00:22:47.683 } 00:22:47.683 Got JSON-RPC error response 00:22:47.683 response: 00:22:47.683 { 00:22:47.683 "code": -5, 00:22:47.683 "message": "Input/output error" 00:22:47.683 } 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 465868 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 465868 ']' 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 465868 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 465868 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 465868' 00:22:47.683 killing process with pid 465868 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 465868 00:22:47.683 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.683 00:22:47.683 Latency(us) 00:22:47.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.683 =================================================================================================================== 00:22:47.683 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:47.683 [2024-07-25 12:35:21.093568] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.683 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 465868 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RKLlxSiFGD 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RKLlxSiFGD 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RKLlxSiFGD 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RKLlxSiFGD' 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=466103 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 466103 /var/tmp/bdevperf.sock 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 466103 ']' 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.255 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.255 [2024-07-25 12:35:21.437860] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:22:48.255 [2024-07-25 12:35:21.437937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466103 ] 00:22:48.255 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.255 [2024-07-25 12:35:21.570412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.516 [2024-07-25 12:35:21.730491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.087 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.087 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:49.087 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RKLlxSiFGD 00:22:49.087 [2024-07-25 12:35:22.495304] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.087 [2024-07-25 12:35:22.495475] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.348 [2024-07-25 12:35:22.508771] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.348 [2024-07-25 12:35:22.508811] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.348 [2024-07-25 12:35:22.508851] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.348 [2024-07-25 12:35:22.508965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4830 (107): Transport endpoint is not connected 00:22:49.348 [2024-07-25 12:35:22.509913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4830 (9): Bad file descriptor 00:22:49.348 [2024-07-25 12:35:22.510910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.348 [2024-07-25 12:35:22.510937] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.348 [2024-07-25 12:35:22.510964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
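This @149 case fails for a different reason than the key mismatch above: the target resolves the PSK by the TLS PSK identity built from the host and subsystem NQNs, and only the host1-to-cnode1 pairing was registered via nvmf_subsystem_add_host --psk, so the identity presented by host2 has no entry (hence the tcp.c/posix.c "Could not find PSK for identity" errors). The identity string has the shape shown in those errors; a trivial reconstruction with placeholder variables:

  # identity the target looks up for a TLS connection; hostnqn/subnqn are whatever the initiator presented
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
  # e.g. NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1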
00:22:49.348 request: 00:22:49.348 { 00:22:49.348 "name": "TLSTEST", 00:22:49.348 "trtype": "tcp", 00:22:49.348 "traddr": "10.0.0.2", 00:22:49.348 "adrfam": "ipv4", 00:22:49.348 "trsvcid": "4420", 00:22:49.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.348 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:49.348 "prchk_reftag": false, 00:22:49.348 "prchk_guard": false, 00:22:49.348 "hdgst": false, 00:22:49.348 "ddgst": false, 00:22:49.348 "psk": "/tmp/tmp.RKLlxSiFGD", 00:22:49.348 "method": "bdev_nvme_attach_controller", 00:22:49.348 "req_id": 1 00:22:49.348 } 00:22:49.348 Got JSON-RPC error response 00:22:49.348 response: 00:22:49.348 { 00:22:49.348 "code": -5, 00:22:49.348 "message": "Input/output error" 00:22:49.348 } 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 466103 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 466103 ']' 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 466103 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466103 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466103' 00:22:49.348 killing process with pid 466103 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 466103 00:22:49.348 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.348 00:22:49.348 Latency(us) 00:22:49.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.348 =================================================================================================================== 00:22:49.348 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.348 [2024-07-25 12:35:22.589733] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.348 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 466103 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RKLlxSiFGD 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RKLlxSiFGD 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RKLlxSiFGD 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RKLlxSiFGD' 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=466231 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 466231 /var/tmp/bdevperf.sock 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 466231 ']' 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.613 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.613 [2024-07-25 12:35:22.933700] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:22:49.613 [2024-07-25 12:35:22.933771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466231 ] 00:22:49.613 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.877 [2024-07-25 12:35:23.069188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.877 [2024-07-25 12:35:23.230823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.448 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.448 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:50.448 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RKLlxSiFGD 00:22:50.708 [2024-07-25 12:35:24.008577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.708 [2024-07-25 12:35:24.008763] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:50.708 [2024-07-25 12:35:24.022588] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:50.708 [2024-07-25 12:35:24.022625] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:50.708 [2024-07-25 12:35:24.022665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.708 [2024-07-25 12:35:24.023467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb830 (107): Transport endpoint is not connected 00:22:50.708 [2024-07-25 12:35:24.024444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb830 (9): Bad file descriptor 00:22:50.708 [2024-07-25 12:35:24.025444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:50.708 [2024-07-25 12:35:24.025470] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.708 [2024-07-25 12:35:24.025496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
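The second mismatch (host1 against cnode2) fails with the same signature; its request/response dump follows. A small sketch of classifying that failure by inspecting the JSON-RPC error object, using only field names that appear in these dumps (the error object is hard-coded here instead of being parsed out of rpc.py output):

    import json

    # Error object copied from the dumps in this log.
    sample_error = json.loads('{"code": -5, "message": "Input/output error"}')

    def is_expected_tls_failure(error: dict) -> bool:
        # -5 (Input/output error) is what bdev_nvme_attach_controller reports
        # when the target drops the connection for lack of a matching PSK.
        return error.get("code") == -5

    print(is_expected_tls_failure(sample_error))  # True for the runs above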
00:22:50.708 request: 00:22:50.708 { 00:22:50.708 "name": "TLSTEST", 00:22:50.708 "trtype": "tcp", 00:22:50.708 "traddr": "10.0.0.2", 00:22:50.708 "adrfam": "ipv4", 00:22:50.708 "trsvcid": "4420", 00:22:50.708 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.708 "prchk_reftag": false, 00:22:50.708 "prchk_guard": false, 00:22:50.708 "hdgst": false, 00:22:50.708 "ddgst": false, 00:22:50.708 "psk": "/tmp/tmp.RKLlxSiFGD", 00:22:50.708 "method": "bdev_nvme_attach_controller", 00:22:50.708 "req_id": 1 00:22:50.708 } 00:22:50.708 Got JSON-RPC error response 00:22:50.708 response: 00:22:50.708 { 00:22:50.708 "code": -5, 00:22:50.708 "message": "Input/output error" 00:22:50.708 } 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 466231 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 466231 ']' 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 466231 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466231 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466231' 00:22:50.708 killing process with pid 466231 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 466231 00:22:50.708 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.708 00:22:50.708 Latency(us) 00:22:50.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.708 =================================================================================================================== 00:22:50.708 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.708 [2024-07-25 12:35:24.121554] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.708 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 466231 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=466518 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 466518 /var/tmp/bdevperf.sock 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 466518 ']' 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.279 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.279 [2024-07-25 12:35:24.466619] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:22:51.279 [2024-07-25 12:35:24.466689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466518 ] 00:22:51.279 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.279 [2024-07-25 12:35:24.602876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.539 [2024-07-25 12:35:24.763120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.110 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.110 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:52.110 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:52.407 [2024-07-25 12:35:25.547816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:52.407 [2024-07-25 12:35:25.549626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa10090 (9): Bad file descriptor 00:22:52.407 [2024-07-25 12:35:25.550618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.407 [2024-07-25 12:35:25.550649] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.407 [2024-07-25 12:35:25.550676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
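Each bdevperf instance in these runs is started with -z and the harness then blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...". A rough Python equivalent of that wait loop (the 30-second timeout is an arbitrary choice, not taken from the log):

    import socket
    import time

    def wait_for_listen(path: str = "/var/tmp/bdevperf.sock",
                        timeout: float = 30.0) -> None:
        # Poll the UNIX domain socket until the RPC server accepts a
        # connection, mirroring the waitforlisten helper seen in this log.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return
            except OSError:
                time.sleep(0.2)
            finally:
                s.close()
        raise TimeoutError(f"{path} did not start listening within {timeout}s")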
00:22:52.407 request: 00:22:52.407 { 00:22:52.407 "name": "TLSTEST", 00:22:52.407 "trtype": "tcp", 00:22:52.407 "traddr": "10.0.0.2", 00:22:52.407 "adrfam": "ipv4", 00:22:52.407 "trsvcid": "4420", 00:22:52.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.407 "prchk_reftag": false, 00:22:52.407 "prchk_guard": false, 00:22:52.407 "hdgst": false, 00:22:52.407 "ddgst": false, 00:22:52.407 "method": "bdev_nvme_attach_controller", 00:22:52.407 "req_id": 1 00:22:52.407 } 00:22:52.407 Got JSON-RPC error response 00:22:52.407 response: 00:22:52.407 { 00:22:52.407 "code": -5, 00:22:52.407 "message": "Input/output error" 00:22:52.407 } 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 466518 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 466518 ']' 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 466518 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466518 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466518' 00:22:52.407 killing process with pid 466518 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 466518 00:22:52.407 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.407 00:22:52.407 Latency(us) 00:22:52.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.407 =================================================================================================================== 00:22:52.407 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.407 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 466518 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 461408 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 461408 ']' 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 461408 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 461408 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 461408' 00:22:52.668 killing process with pid 461408 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 461408 00:22:52.668 [2024-07-25 12:35:25.988379] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:52.668 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 461408 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.gFI8vgMpPO 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.gFI8vgMpPO 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=466839 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 466839 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 466839 ']' 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.930 12:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.930 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.930 [2024-07-25 12:35:26.326858] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:22:52.930 [2024-07-25 12:35:26.326930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.191 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.191 [2024-07-25 12:35:26.415364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.191 [2024-07-25 12:35:26.521416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.191 [2024-07-25 12:35:26.521480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.191 [2024-07-25 12:35:26.521491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.191 [2024-07-25 12:35:26.521500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.191 [2024-07-25 12:35:26.521508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
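The NVMeTLSkey-1:02:... interchange key generated a few lines above comes from base64-encoding the configured key string together with a 4-byte CRC32 and wrapping the result in the NVMeTLSkey-1 framing. A sketch of that construction; the little-endian CRC byte order is an assumption made for illustration, with only the visible output in this log used as a reference:

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        # ASCII key material plus a trailing CRC32, base64-encoded and framed.
        # Little-endian CRC byte order is assumed, not confirmed by the log.
        payload = key.encode("ascii")
        payload += zlib.crc32(payload).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02d}:{}:".format(
            hash_id, base64.b64encode(payload).decode("ascii"))

    # Input and digest selector as used by format_interchange_psk above:
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))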
00:22:53.191 [2024-07-25 12:35:26.521538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.gFI8vgMpPO 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gFI8vgMpPO 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:54.134 [2024-07-25 12:35:27.424222] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.134 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:54.395 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:54.656 [2024-07-25 12:35:27.837297] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.656 [2024-07-25 12:35:27.837636] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.656 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:54.656 malloc0 00:22:54.656 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:54.916 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO 00:22:55.175 [2024-07-25 12:35:28.462241] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFI8vgMpPO 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gFI8vgMpPO' 00:22:55.175 12:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=467194 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 467194 /var/tmp/bdevperf.sock 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 467194 ']' 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.175 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.175 [2024-07-25 12:35:28.531754] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:22:55.175 [2024-07-25 12:35:28.531822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467194 ] 00:22:55.175 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.435 [2024-07-25 12:35:28.665328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.435 [2024-07-25 12:35:28.827024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.005 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.005 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:56.005 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO 00:22:56.265 [2024-07-25 12:35:29.588284] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.265 [2024-07-25 12:35:29.588457] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:56.265 TLSTESTn1 00:22:56.526 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.526 Running I/O for 10 seconds... 
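This time the controller attaches, TLSTESTn1 is created, and the 10-second verify workload is driven through bdevperf.py; its result table follows. A quick cross-check of that table: IOPS times the 4096-byte I/O size from the bdevperf command line should reproduce the MiB/s column:

    # Throughput cross-check for the table below.
    io_size = 4096              # -o 4096 on the bdevperf command line
    iops = 1907.52              # reported for TLSTESTn1
    mib_per_s = iops * io_size / (1024 * 1024)
    print(round(mib_per_s, 2))  # ~7.45, matching the MiB/s column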
00:23:06.548 00:23:06.548 Latency(us) 00:23:06.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.548 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:06.548 Verification LBA range: start 0x0 length 0x2000 00:23:06.548 TLSTESTn1 : 10.03 1907.52 7.45 0.00 0.00 66915.23 8922.98 92355.35 00:23:06.548 =================================================================================================================== 00:23:06.548 Total : 1907.52 7.45 0.00 0.00 66915.23 8922.98 92355.35 00:23:06.548 0 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 467194 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 467194 ']' 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 467194 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 467194 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 467194' 00:23:06.548 killing process with pid 467194 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 467194 00:23:06.548 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.548 00:23:06.548 Latency(us) 00:23:06.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.548 =================================================================================================================== 00:23:06.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.548 [2024-07-25 12:35:39.958358] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:06.548 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 467194 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.gFI8vgMpPO 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFI8vgMpPO 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFI8vgMpPO 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:07.120 12:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFI8vgMpPO 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gFI8vgMpPO' 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=469094 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 469094 /var/tmp/bdevperf.sock 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 469094 ']' 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.120 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.120 [2024-07-25 12:35:40.323411] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:23:07.120 [2024-07-25 12:35:40.323488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469094 ] 00:23:07.120 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.120 [2024-07-25 12:35:40.456210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.382 [2024-07-25 12:35:40.617826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.954 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.954 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.954 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO 00:23:08.214 [2024-07-25 12:35:41.399080] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.214 [2024-07-25 12:35:41.399206] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:08.214 [2024-07-25 12:35:41.399228] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.gFI8vgMpPO 00:23:08.214 request: 00:23:08.214 { 00:23:08.214 "name": "TLSTEST", 00:23:08.214 "trtype": "tcp", 00:23:08.214 "traddr": "10.0.0.2", 00:23:08.214 "adrfam": "ipv4", 00:23:08.214 "trsvcid": "4420", 00:23:08.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.214 "prchk_reftag": false, 00:23:08.214 "prchk_guard": false, 00:23:08.214 "hdgst": false, 00:23:08.214 "ddgst": false, 00:23:08.214 "psk": "/tmp/tmp.gFI8vgMpPO", 00:23:08.214 "method": "bdev_nvme_attach_controller", 00:23:08.214 "req_id": 1 00:23:08.214 } 00:23:08.214 Got JSON-RPC error response 00:23:08.214 response: 00:23:08.215 { 00:23:08.215 "code": -1, 00:23:08.215 "message": "Operation not permitted" 00:23:08.215 } 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 469094 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 469094 ']' 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 469094 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469094 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469094' 00:23:08.215 killing process with pid 469094 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 469094 00:23:08.215 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.215 
00:23:08.215 Latency(us) 00:23:08.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.215 =================================================================================================================== 00:23:08.215 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.215 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 469094 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 466839 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 466839 ']' 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 466839 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466839 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466839' 00:23:08.479 killing process with pid 466839 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 466839 00:23:08.479 [2024-07-25 12:35:41.835599] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:08.479 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 466839 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=469350 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 469350 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 469350 ']' 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.780 12:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.780 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.780 [2024-07-25 12:35:42.099710] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:08.780 [2024-07-25 12:35:42.099780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.780 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.780 [2024-07-25 12:35:42.187949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.062 [2024-07-25 12:35:42.295329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.062 [2024-07-25 12:35:42.295393] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.062 [2024-07-25 12:35:42.295405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.062 [2024-07-25 12:35:42.295414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.062 [2024-07-25 12:35:42.295422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
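setup_nvmf_tgt below walks the target through the usual RPC sequence: create the TCP transport, the subsystem, a TLS listener (-k), a malloc namespace, and finally nvmf_subsystem_add_host with --psk; with /tmp/tmp.gFI8vgMpPO still at 0666, that last step is rejected. A condensed sketch of the same sequence, with every RPC name and argument copied from this log:

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    # The final add_host is the step that fails while the PSK file is
    # world-readable (0666) and succeeds once it is back at 0600.
    steps = [
        ["nvmf_create_transport", "-t", "tcp", "-o"],
        ["nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
         "-s", "SPDK00000000000001", "-m", "10"],
        ["nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
         "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k"],
        ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        ["nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1"],
        ["nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
         "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.gFI8vgMpPO"],
    ]

    for step in steps:
        subprocess.run([RPC] + step, check=True)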
00:23:09.062 [2024-07-25 12:35:42.295451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.634 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.634 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:09.634 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.634 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.634 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.gFI8vgMpPO 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gFI8vgMpPO 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.gFI8vgMpPO 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gFI8vgMpPO 00:23:09.634 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:09.896 [2024-07-25 12:35:43.203334] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.896 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.157 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.418 [2024-07-25 12:35:43.588338] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.418 [2024-07-25 12:35:43.588664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.418 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.418 malloc0 00:23:10.418 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:10.680 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO 00:23:10.940 [2024-07-25 12:35:44.213256] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:10.940 [2024-07-25 12:35:44.213301] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:10.940 [2024-07-25 12:35:44.213339] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:10.940 request: 00:23:10.940 { 00:23:10.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.940 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.940 "psk": "/tmp/tmp.gFI8vgMpPO", 00:23:10.940 "method": "nvmf_subsystem_add_host", 00:23:10.940 "req_id": 1 00:23:10.940 } 00:23:10.940 Got JSON-RPC error response 00:23:10.941 response: 00:23:10.941 { 00:23:10.941 "code": -32603, 00:23:10.941 "message": "Internal error" 00:23:10.941 } 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 469350 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 469350 ']' 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 469350 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469350 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469350' 00:23:10.941 killing process with pid 469350 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 469350 00:23:10.941 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 469350 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.gFI8vgMpPO 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=469936 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 469936 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 469936 ']' 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.202 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.202 [2024-07-25 12:35:44.581680] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:11.202 [2024-07-25 12:35:44.581749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.202 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.464 [2024-07-25 12:35:44.673782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.464 [2024-07-25 12:35:44.777086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.464 [2024-07-25 12:35:44.777149] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.464 [2024-07-25 12:35:44.777160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.464 [2024-07-25 12:35:44.777169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.464 [2024-07-25 12:35:44.777178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
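With the key back at 0600, the full path succeeds below: the target is set up again, TLSTESTn1 attaches over TLS, and the harness snapshots the target with rpc.py save_config, producing the JSON dump further below. A small sketch of pulling the TLS-relevant socket options back out of such a snapshot ("tgtconf.json" is a made-up file name; the field names match the dump):

    import json

    # Load a snapshot like the one produced by "rpc.py save_config" below.
    with open("tgtconf.json") as f:
        config = json.load(f)

    # Print the TLS-related options of each sock_impl_set_options entry.
    for subsystem in config["subsystems"]:
        if subsystem["subsystem"] != "sock":
            continue
        for entry in subsystem["config"]:
            if entry["method"] == "sock_impl_set_options":
                p = entry["params"]
                print(p["impl_name"],
                      "tls_version =", p["tls_version"],
                      "enable_ktls =", p["enable_ktls"])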
00:23:11.464 [2024-07-25 12:35:44.777212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.gFI8vgMpPO 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gFI8vgMpPO 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.408 [2024-07-25 12:35:45.664448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.408 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.669 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.669 [2024-07-25 12:35:46.077553] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.669 [2024-07-25 12:35:46.077871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.931 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.931 malloc0 00:23:12.931 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.192 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO 00:23:13.453 [2024-07-25 12:35:46.670374] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=470289 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 470289 /var/tmp/bdevperf.sock 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # 
'[' -z 470289 ']' 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.453 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.454 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.454 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.454 [2024-07-25 12:35:46.753854] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:13.454 [2024-07-25 12:35:46.753919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470289 ] 00:23:13.454 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.714 [2024-07-25 12:35:46.890089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.714 [2024-07-25 12:35:47.049929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.287 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.287 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:14.287 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO 00:23:14.549 [2024-07-25 12:35:47.730522] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.549 [2024-07-25 12:35:47.730732] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.549 TLSTESTn1 00:23:14.549 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:14.810 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:14.810 "subsystems": [ 00:23:14.810 { 00:23:14.810 "subsystem": "keyring", 00:23:14.810 "config": [] 00:23:14.810 }, 00:23:14.810 { 00:23:14.810 "subsystem": "iobuf", 00:23:14.810 "config": [ 00:23:14.810 { 00:23:14.810 "method": "iobuf_set_options", 00:23:14.810 "params": { 00:23:14.810 "small_pool_count": 8192, 00:23:14.810 "large_pool_count": 1024, 00:23:14.810 "small_bufsize": 8192, 00:23:14.810 "large_bufsize": 135168 00:23:14.810 } 00:23:14.810 } 00:23:14.810 ] 00:23:14.810 }, 00:23:14.810 { 00:23:14.810 "subsystem": "sock", 00:23:14.810 "config": [ 00:23:14.810 { 00:23:14.810 "method": "sock_set_default_impl", 00:23:14.810 "params": { 00:23:14.810 "impl_name": "posix" 00:23:14.810 } 00:23:14.810 }, 00:23:14.810 { 00:23:14.810 "method": "sock_impl_set_options", 00:23:14.810 "params": { 00:23:14.810 "impl_name": "ssl", 00:23:14.810 "recv_buf_size": 4096, 00:23:14.810 "send_buf_size": 4096, 
00:23:14.810 "enable_recv_pipe": true, 00:23:14.810 "enable_quickack": false, 00:23:14.810 "enable_placement_id": 0, 00:23:14.810 "enable_zerocopy_send_server": true, 00:23:14.810 "enable_zerocopy_send_client": false, 00:23:14.810 "zerocopy_threshold": 0, 00:23:14.810 "tls_version": 0, 00:23:14.810 "enable_ktls": false 00:23:14.810 } 00:23:14.810 }, 00:23:14.810 { 00:23:14.810 "method": "sock_impl_set_options", 00:23:14.810 "params": { 00:23:14.810 "impl_name": "posix", 00:23:14.811 "recv_buf_size": 2097152, 00:23:14.811 "send_buf_size": 2097152, 00:23:14.811 "enable_recv_pipe": true, 00:23:14.811 "enable_quickack": false, 00:23:14.811 "enable_placement_id": 0, 00:23:14.811 "enable_zerocopy_send_server": true, 00:23:14.811 "enable_zerocopy_send_client": false, 00:23:14.811 "zerocopy_threshold": 0, 00:23:14.811 "tls_version": 0, 00:23:14.811 "enable_ktls": false 00:23:14.811 } 00:23:14.811 } 00:23:14.811 ] 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "subsystem": "vmd", 00:23:14.811 "config": [] 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "subsystem": "accel", 00:23:14.811 "config": [ 00:23:14.811 { 00:23:14.811 "method": "accel_set_options", 00:23:14.811 "params": { 00:23:14.811 "small_cache_size": 128, 00:23:14.811 "large_cache_size": 16, 00:23:14.811 "task_count": 2048, 00:23:14.811 "sequence_count": 2048, 00:23:14.811 "buf_count": 2048 00:23:14.811 } 00:23:14.811 } 00:23:14.811 ] 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "subsystem": "bdev", 00:23:14.811 "config": [ 00:23:14.811 { 00:23:14.811 "method": "bdev_set_options", 00:23:14.811 "params": { 00:23:14.811 "bdev_io_pool_size": 65535, 00:23:14.811 "bdev_io_cache_size": 256, 00:23:14.811 "bdev_auto_examine": true, 00:23:14.811 "iobuf_small_cache_size": 128, 00:23:14.811 "iobuf_large_cache_size": 16 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "bdev_raid_set_options", 00:23:14.811 "params": { 00:23:14.811 "process_window_size_kb": 1024, 00:23:14.811 "process_max_bandwidth_mb_sec": 0 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "bdev_iscsi_set_options", 00:23:14.811 "params": { 00:23:14.811 "timeout_sec": 30 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "bdev_nvme_set_options", 00:23:14.811 "params": { 00:23:14.811 "action_on_timeout": "none", 00:23:14.811 "timeout_us": 0, 00:23:14.811 "timeout_admin_us": 0, 00:23:14.811 "keep_alive_timeout_ms": 10000, 00:23:14.811 "arbitration_burst": 0, 00:23:14.811 "low_priority_weight": 0, 00:23:14.811 "medium_priority_weight": 0, 00:23:14.811 "high_priority_weight": 0, 00:23:14.811 "nvme_adminq_poll_period_us": 10000, 00:23:14.811 "nvme_ioq_poll_period_us": 0, 00:23:14.811 "io_queue_requests": 0, 00:23:14.811 "delay_cmd_submit": true, 00:23:14.811 "transport_retry_count": 4, 00:23:14.811 "bdev_retry_count": 3, 00:23:14.811 "transport_ack_timeout": 0, 00:23:14.811 "ctrlr_loss_timeout_sec": 0, 00:23:14.811 "reconnect_delay_sec": 0, 00:23:14.811 "fast_io_fail_timeout_sec": 0, 00:23:14.811 "disable_auto_failback": false, 00:23:14.811 "generate_uuids": false, 00:23:14.811 "transport_tos": 0, 00:23:14.811 "nvme_error_stat": false, 00:23:14.811 "rdma_srq_size": 0, 00:23:14.811 "io_path_stat": false, 00:23:14.811 "allow_accel_sequence": false, 00:23:14.811 "rdma_max_cq_size": 0, 00:23:14.811 "rdma_cm_event_timeout_ms": 0, 00:23:14.811 "dhchap_digests": [ 00:23:14.811 "sha256", 00:23:14.811 "sha384", 00:23:14.811 "sha512" 00:23:14.811 ], 00:23:14.811 "dhchap_dhgroups": [ 00:23:14.811 "null", 00:23:14.811 "ffdhe2048", 00:23:14.811 
"ffdhe3072", 00:23:14.811 "ffdhe4096", 00:23:14.811 "ffdhe6144", 00:23:14.811 "ffdhe8192" 00:23:14.811 ] 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "bdev_nvme_set_hotplug", 00:23:14.811 "params": { 00:23:14.811 "period_us": 100000, 00:23:14.811 "enable": false 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "bdev_malloc_create", 00:23:14.811 "params": { 00:23:14.811 "name": "malloc0", 00:23:14.811 "num_blocks": 8192, 00:23:14.811 "block_size": 4096, 00:23:14.811 "physical_block_size": 4096, 00:23:14.811 "uuid": "d22109c7-ceb1-4382-a5da-15571c281b4a", 00:23:14.811 "optimal_io_boundary": 0, 00:23:14.811 "md_size": 0, 00:23:14.811 "dif_type": 0, 00:23:14.811 "dif_is_head_of_md": false, 00:23:14.811 "dif_pi_format": 0 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "bdev_wait_for_examine" 00:23:14.811 } 00:23:14.811 ] 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "subsystem": "nbd", 00:23:14.811 "config": [] 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "subsystem": "scheduler", 00:23:14.811 "config": [ 00:23:14.811 { 00:23:14.811 "method": "framework_set_scheduler", 00:23:14.811 "params": { 00:23:14.811 "name": "static" 00:23:14.811 } 00:23:14.811 } 00:23:14.811 ] 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "subsystem": "nvmf", 00:23:14.811 "config": [ 00:23:14.811 { 00:23:14.811 "method": "nvmf_set_config", 00:23:14.811 "params": { 00:23:14.811 "discovery_filter": "match_any", 00:23:14.811 "admin_cmd_passthru": { 00:23:14.811 "identify_ctrlr": false 00:23:14.811 } 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "nvmf_set_max_subsystems", 00:23:14.811 "params": { 00:23:14.811 "max_subsystems": 1024 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "nvmf_set_crdt", 00:23:14.811 "params": { 00:23:14.811 "crdt1": 0, 00:23:14.811 "crdt2": 0, 00:23:14.811 "crdt3": 0 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "nvmf_create_transport", 00:23:14.811 "params": { 00:23:14.811 "trtype": "TCP", 00:23:14.811 "max_queue_depth": 128, 00:23:14.811 "max_io_qpairs_per_ctrlr": 127, 00:23:14.811 "in_capsule_data_size": 4096, 00:23:14.811 "max_io_size": 131072, 00:23:14.811 "io_unit_size": 131072, 00:23:14.811 "max_aq_depth": 128, 00:23:14.811 "num_shared_buffers": 511, 00:23:14.811 "buf_cache_size": 4294967295, 00:23:14.811 "dif_insert_or_strip": false, 00:23:14.811 "zcopy": false, 00:23:14.811 "c2h_success": false, 00:23:14.811 "sock_priority": 0, 00:23:14.811 "abort_timeout_sec": 1, 00:23:14.811 "ack_timeout": 0, 00:23:14.811 "data_wr_pool_size": 0 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "nvmf_create_subsystem", 00:23:14.811 "params": { 00:23:14.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.811 "allow_any_host": false, 00:23:14.811 "serial_number": "SPDK00000000000001", 00:23:14.811 "model_number": "SPDK bdev Controller", 00:23:14.811 "max_namespaces": 10, 00:23:14.811 "min_cntlid": 1, 00:23:14.811 "max_cntlid": 65519, 00:23:14.811 "ana_reporting": false 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "nvmf_subsystem_add_host", 00:23:14.811 "params": { 00:23:14.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.811 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.811 "psk": "/tmp/tmp.gFI8vgMpPO" 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "nvmf_subsystem_add_ns", 00:23:14.811 "params": { 00:23:14.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.811 "namespace": { 00:23:14.811 "nsid": 1, 00:23:14.811 
"bdev_name": "malloc0", 00:23:14.811 "nguid": "D22109C7CEB14382A5DA15571C281B4A", 00:23:14.811 "uuid": "d22109c7-ceb1-4382-a5da-15571c281b4a", 00:23:14.811 "no_auto_visible": false 00:23:14.811 } 00:23:14.811 } 00:23:14.811 }, 00:23:14.811 { 00:23:14.811 "method": "nvmf_subsystem_add_listener", 00:23:14.811 "params": { 00:23:14.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.812 "listen_address": { 00:23:14.812 "trtype": "TCP", 00:23:14.812 "adrfam": "IPv4", 00:23:14.812 "traddr": "10.0.0.2", 00:23:14.812 "trsvcid": "4420" 00:23:14.812 }, 00:23:14.812 "secure_channel": true 00:23:14.812 } 00:23:14.812 } 00:23:14.812 ] 00:23:14.812 } 00:23:14.812 ] 00:23:14.812 }' 00:23:14.812 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:15.073 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:15.073 "subsystems": [ 00:23:15.073 { 00:23:15.073 "subsystem": "keyring", 00:23:15.073 "config": [] 00:23:15.073 }, 00:23:15.073 { 00:23:15.073 "subsystem": "iobuf", 00:23:15.073 "config": [ 00:23:15.073 { 00:23:15.073 "method": "iobuf_set_options", 00:23:15.073 "params": { 00:23:15.073 "small_pool_count": 8192, 00:23:15.073 "large_pool_count": 1024, 00:23:15.073 "small_bufsize": 8192, 00:23:15.073 "large_bufsize": 135168 00:23:15.073 } 00:23:15.073 } 00:23:15.073 ] 00:23:15.073 }, 00:23:15.073 { 00:23:15.073 "subsystem": "sock", 00:23:15.073 "config": [ 00:23:15.073 { 00:23:15.073 "method": "sock_set_default_impl", 00:23:15.073 "params": { 00:23:15.073 "impl_name": "posix" 00:23:15.073 } 00:23:15.073 }, 00:23:15.073 { 00:23:15.073 "method": "sock_impl_set_options", 00:23:15.073 "params": { 00:23:15.073 "impl_name": "ssl", 00:23:15.073 "recv_buf_size": 4096, 00:23:15.073 "send_buf_size": 4096, 00:23:15.073 "enable_recv_pipe": true, 00:23:15.073 "enable_quickack": false, 00:23:15.073 "enable_placement_id": 0, 00:23:15.073 "enable_zerocopy_send_server": true, 00:23:15.073 "enable_zerocopy_send_client": false, 00:23:15.073 "zerocopy_threshold": 0, 00:23:15.073 "tls_version": 0, 00:23:15.073 "enable_ktls": false 00:23:15.073 } 00:23:15.073 }, 00:23:15.073 { 00:23:15.073 "method": "sock_impl_set_options", 00:23:15.073 "params": { 00:23:15.073 "impl_name": "posix", 00:23:15.073 "recv_buf_size": 2097152, 00:23:15.073 "send_buf_size": 2097152, 00:23:15.073 "enable_recv_pipe": true, 00:23:15.073 "enable_quickack": false, 00:23:15.073 "enable_placement_id": 0, 00:23:15.073 "enable_zerocopy_send_server": true, 00:23:15.073 "enable_zerocopy_send_client": false, 00:23:15.073 "zerocopy_threshold": 0, 00:23:15.073 "tls_version": 0, 00:23:15.073 "enable_ktls": false 00:23:15.073 } 00:23:15.073 } 00:23:15.073 ] 00:23:15.073 }, 00:23:15.073 { 00:23:15.073 "subsystem": "vmd", 00:23:15.073 "config": [] 00:23:15.073 }, 00:23:15.073 { 00:23:15.074 "subsystem": "accel", 00:23:15.074 "config": [ 00:23:15.074 { 00:23:15.074 "method": "accel_set_options", 00:23:15.074 "params": { 00:23:15.074 "small_cache_size": 128, 00:23:15.074 "large_cache_size": 16, 00:23:15.074 "task_count": 2048, 00:23:15.074 "sequence_count": 2048, 00:23:15.074 "buf_count": 2048 00:23:15.074 } 00:23:15.074 } 00:23:15.074 ] 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "subsystem": "bdev", 00:23:15.074 "config": [ 00:23:15.074 { 00:23:15.074 "method": "bdev_set_options", 00:23:15.074 "params": { 00:23:15.074 "bdev_io_pool_size": 65535, 00:23:15.074 "bdev_io_cache_size": 256, 00:23:15.074 
"bdev_auto_examine": true, 00:23:15.074 "iobuf_small_cache_size": 128, 00:23:15.074 "iobuf_large_cache_size": 16 00:23:15.074 } 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "method": "bdev_raid_set_options", 00:23:15.074 "params": { 00:23:15.074 "process_window_size_kb": 1024, 00:23:15.074 "process_max_bandwidth_mb_sec": 0 00:23:15.074 } 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "method": "bdev_iscsi_set_options", 00:23:15.074 "params": { 00:23:15.074 "timeout_sec": 30 00:23:15.074 } 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "method": "bdev_nvme_set_options", 00:23:15.074 "params": { 00:23:15.074 "action_on_timeout": "none", 00:23:15.074 "timeout_us": 0, 00:23:15.074 "timeout_admin_us": 0, 00:23:15.074 "keep_alive_timeout_ms": 10000, 00:23:15.074 "arbitration_burst": 0, 00:23:15.074 "low_priority_weight": 0, 00:23:15.074 "medium_priority_weight": 0, 00:23:15.074 "high_priority_weight": 0, 00:23:15.074 "nvme_adminq_poll_period_us": 10000, 00:23:15.074 "nvme_ioq_poll_period_us": 0, 00:23:15.074 "io_queue_requests": 512, 00:23:15.074 "delay_cmd_submit": true, 00:23:15.074 "transport_retry_count": 4, 00:23:15.074 "bdev_retry_count": 3, 00:23:15.074 "transport_ack_timeout": 0, 00:23:15.074 "ctrlr_loss_timeout_sec": 0, 00:23:15.074 "reconnect_delay_sec": 0, 00:23:15.074 "fast_io_fail_timeout_sec": 0, 00:23:15.074 "disable_auto_failback": false, 00:23:15.074 "generate_uuids": false, 00:23:15.074 "transport_tos": 0, 00:23:15.074 "nvme_error_stat": false, 00:23:15.074 "rdma_srq_size": 0, 00:23:15.074 "io_path_stat": false, 00:23:15.074 "allow_accel_sequence": false, 00:23:15.074 "rdma_max_cq_size": 0, 00:23:15.074 "rdma_cm_event_timeout_ms": 0, 00:23:15.074 "dhchap_digests": [ 00:23:15.074 "sha256", 00:23:15.074 "sha384", 00:23:15.074 "sha512" 00:23:15.074 ], 00:23:15.074 "dhchap_dhgroups": [ 00:23:15.074 "null", 00:23:15.074 "ffdhe2048", 00:23:15.074 "ffdhe3072", 00:23:15.074 "ffdhe4096", 00:23:15.074 "ffdhe6144", 00:23:15.074 "ffdhe8192" 00:23:15.074 ] 00:23:15.074 } 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "method": "bdev_nvme_attach_controller", 00:23:15.074 "params": { 00:23:15.074 "name": "TLSTEST", 00:23:15.074 "trtype": "TCP", 00:23:15.074 "adrfam": "IPv4", 00:23:15.074 "traddr": "10.0.0.2", 00:23:15.074 "trsvcid": "4420", 00:23:15.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.074 "prchk_reftag": false, 00:23:15.074 "prchk_guard": false, 00:23:15.074 "ctrlr_loss_timeout_sec": 0, 00:23:15.074 "reconnect_delay_sec": 0, 00:23:15.074 "fast_io_fail_timeout_sec": 0, 00:23:15.074 "psk": "/tmp/tmp.gFI8vgMpPO", 00:23:15.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.074 "hdgst": false, 00:23:15.074 "ddgst": false 00:23:15.074 } 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "method": "bdev_nvme_set_hotplug", 00:23:15.074 "params": { 00:23:15.074 "period_us": 100000, 00:23:15.074 "enable": false 00:23:15.074 } 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "method": "bdev_wait_for_examine" 00:23:15.074 } 00:23:15.074 ] 00:23:15.074 }, 00:23:15.074 { 00:23:15.074 "subsystem": "nbd", 00:23:15.074 "config": [] 00:23:15.074 } 00:23:15.074 ] 00:23:15.074 }' 00:23:15.074 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 470289 00:23:15.074 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 470289 ']' 00:23:15.074 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 470289 00:23:15.074 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.074 
12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.074 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 470289 00:23:15.336 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:15.336 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:15.336 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 470289' 00:23:15.336 killing process with pid 470289 00:23:15.336 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 470289 00:23:15.336 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.336 00:23:15.336 Latency(us) 00:23:15.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.336 =================================================================================================================== 00:23:15.336 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.336 [2024-07-25 12:35:48.512401] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:15.336 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 470289 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 469936 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 469936 ']' 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 469936 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469936 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469936' 00:23:15.597 killing process with pid 469936 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 469936 00:23:15.597 [2024-07-25 12:35:48.865566] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:15.597 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 469936 00:23:15.859 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:15.859 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.859 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:15.859 "subsystems": [ 00:23:15.859 { 00:23:15.859 "subsystem": "keyring", 00:23:15.859 "config": [] 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "subsystem": "iobuf", 00:23:15.859 "config": [ 00:23:15.859 { 00:23:15.859 "method": "iobuf_set_options", 00:23:15.859 "params": { 
00:23:15.859 "small_pool_count": 8192, 00:23:15.859 "large_pool_count": 1024, 00:23:15.859 "small_bufsize": 8192, 00:23:15.859 "large_bufsize": 135168 00:23:15.859 } 00:23:15.859 } 00:23:15.859 ] 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "subsystem": "sock", 00:23:15.859 "config": [ 00:23:15.859 { 00:23:15.859 "method": "sock_set_default_impl", 00:23:15.859 "params": { 00:23:15.859 "impl_name": "posix" 00:23:15.859 } 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "method": "sock_impl_set_options", 00:23:15.859 "params": { 00:23:15.859 "impl_name": "ssl", 00:23:15.859 "recv_buf_size": 4096, 00:23:15.859 "send_buf_size": 4096, 00:23:15.859 "enable_recv_pipe": true, 00:23:15.859 "enable_quickack": false, 00:23:15.859 "enable_placement_id": 0, 00:23:15.859 "enable_zerocopy_send_server": true, 00:23:15.859 "enable_zerocopy_send_client": false, 00:23:15.859 "zerocopy_threshold": 0, 00:23:15.859 "tls_version": 0, 00:23:15.859 "enable_ktls": false 00:23:15.859 } 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "method": "sock_impl_set_options", 00:23:15.859 "params": { 00:23:15.859 "impl_name": "posix", 00:23:15.859 "recv_buf_size": 2097152, 00:23:15.859 "send_buf_size": 2097152, 00:23:15.859 "enable_recv_pipe": true, 00:23:15.859 "enable_quickack": false, 00:23:15.859 "enable_placement_id": 0, 00:23:15.859 "enable_zerocopy_send_server": true, 00:23:15.859 "enable_zerocopy_send_client": false, 00:23:15.859 "zerocopy_threshold": 0, 00:23:15.859 "tls_version": 0, 00:23:15.859 "enable_ktls": false 00:23:15.859 } 00:23:15.859 } 00:23:15.859 ] 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "subsystem": "vmd", 00:23:15.859 "config": [] 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "subsystem": "accel", 00:23:15.859 "config": [ 00:23:15.859 { 00:23:15.859 "method": "accel_set_options", 00:23:15.859 "params": { 00:23:15.859 "small_cache_size": 128, 00:23:15.859 "large_cache_size": 16, 00:23:15.859 "task_count": 2048, 00:23:15.859 "sequence_count": 2048, 00:23:15.859 "buf_count": 2048 00:23:15.859 } 00:23:15.859 } 00:23:15.859 ] 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "subsystem": "bdev", 00:23:15.859 "config": [ 00:23:15.859 { 00:23:15.859 "method": "bdev_set_options", 00:23:15.859 "params": { 00:23:15.859 "bdev_io_pool_size": 65535, 00:23:15.859 "bdev_io_cache_size": 256, 00:23:15.859 "bdev_auto_examine": true, 00:23:15.859 "iobuf_small_cache_size": 128, 00:23:15.859 "iobuf_large_cache_size": 16 00:23:15.859 } 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "method": "bdev_raid_set_options", 00:23:15.859 "params": { 00:23:15.859 "process_window_size_kb": 1024, 00:23:15.859 "process_max_bandwidth_mb_sec": 0 00:23:15.859 } 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "method": "bdev_iscsi_set_options", 00:23:15.859 "params": { 00:23:15.859 "timeout_sec": 30 00:23:15.859 } 00:23:15.859 }, 00:23:15.859 { 00:23:15.859 "method": "bdev_nvme_set_options", 00:23:15.859 "params": { 00:23:15.859 "action_on_timeout": "none", 00:23:15.859 "timeout_us": 0, 00:23:15.859 "timeout_admin_us": 0, 00:23:15.859 "keep_alive_timeout_ms": 10000, 00:23:15.859 "arbitration_burst": 0, 00:23:15.859 "low_priority_weight": 0, 00:23:15.859 "medium_priority_weight": 0, 00:23:15.859 "high_priority_weight": 0, 00:23:15.859 "nvme_adminq_poll_period_us": 10000, 00:23:15.859 "nvme_ioq_poll_period_us": 0, 00:23:15.859 "io_queue_requests": 0, 00:23:15.859 "delay_cmd_submit": true, 00:23:15.859 "transport_retry_count": 4, 00:23:15.859 "bdev_retry_count": 3, 00:23:15.859 "transport_ack_timeout": 0, 00:23:15.859 "ctrlr_loss_timeout_sec": 0, 00:23:15.859 
"reconnect_delay_sec": 0, 00:23:15.859 "fast_io_fail_timeout_sec": 0, 00:23:15.859 "disable_auto_failback": false, 00:23:15.859 "generate_uuids": false, 00:23:15.859 "transport_tos": 0, 00:23:15.859 "nvme_error_stat": false, 00:23:15.859 "rdma_srq_size": 0, 00:23:15.859 "io_path_stat": false, 00:23:15.859 "allow_accel_sequence": false, 00:23:15.859 "rdma_max_cq_size": 0, 00:23:15.859 "rdma_cm_event_timeout_ms": 0, 00:23:15.859 "dhchap_digests": [ 00:23:15.859 "sha256", 00:23:15.859 "sha384", 00:23:15.860 "sha512" 00:23:15.860 ], 00:23:15.860 "dhchap_dhgroups": [ 00:23:15.860 "null", 00:23:15.860 "ffdhe2048", 00:23:15.860 "ffdhe3072", 00:23:15.860 "ffdhe4096", 00:23:15.860 "ffdhe6144", 00:23:15.860 "ffdhe8192" 00:23:15.860 ] 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "bdev_nvme_set_hotplug", 00:23:15.860 "params": { 00:23:15.860 "period_us": 100000, 00:23:15.860 "enable": false 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "bdev_malloc_create", 00:23:15.860 "params": { 00:23:15.860 "name": "malloc0", 00:23:15.860 "num_blocks": 8192, 00:23:15.860 "block_size": 4096, 00:23:15.860 "physical_block_size": 4096, 00:23:15.860 "uuid": "d22109c7-ceb1-4382-a5da-15571c281b4a", 00:23:15.860 "optimal_io_boundary": 0, 00:23:15.860 "md_size": 0, 00:23:15.860 "dif_type": 0, 00:23:15.860 "dif_is_head_of_md": false, 00:23:15.860 "dif_pi_format": 0 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "bdev_wait_for_examine" 00:23:15.860 } 00:23:15.860 ] 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "subsystem": "nbd", 00:23:15.860 "config": [] 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "subsystem": "scheduler", 00:23:15.860 "config": [ 00:23:15.860 { 00:23:15.860 "method": "framework_set_scheduler", 00:23:15.860 "params": { 00:23:15.860 "name": "static" 00:23:15.860 } 00:23:15.860 } 00:23:15.860 ] 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "subsystem": "nvmf", 00:23:15.860 "config": [ 00:23:15.860 { 00:23:15.860 "method": "nvmf_set_config", 00:23:15.860 "params": { 00:23:15.860 "discovery_filter": "match_any", 00:23:15.860 "admin_cmd_passthru": { 00:23:15.860 "identify_ctrlr": false 00:23:15.860 } 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "nvmf_set_max_subsystems", 00:23:15.860 "params": { 00:23:15.860 "max_subsystems": 1024 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "nvmf_set_crdt", 00:23:15.860 "params": { 00:23:15.860 "crdt1": 0, 00:23:15.860 "crdt2": 0, 00:23:15.860 "crdt3": 0 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "nvmf_create_transport", 00:23:15.860 "params": { 00:23:15.860 "trtype": "TCP", 00:23:15.860 "max_queue_depth": 128, 00:23:15.860 "max_io_qpairs_per_ctrlr": 127, 00:23:15.860 "in_capsule_data_size": 4096, 00:23:15.860 "max_io_size": 131072, 00:23:15.860 "io_unit_size": 131072, 00:23:15.860 "max_aq_depth": 128, 00:23:15.860 "num_shared_buffers": 511, 00:23:15.860 "buf_cache_size": 4294967295, 00:23:15.860 "dif_insert_or_strip": false, 00:23:15.860 "zcopy": false, 00:23:15.860 "c2h_success": false, 00:23:15.860 "sock_priority": 0, 00:23:15.860 "abort_timeout_sec": 1, 00:23:15.860 "ack_timeout": 0, 00:23:15.860 "data_wr_pool_size": 0 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "nvmf_create_subsystem", 00:23:15.860 "params": { 00:23:15.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.860 "allow_any_host": false, 00:23:15.860 "serial_number": "SPDK00000000000001", 00:23:15.860 "model_number": "SPDK bdev Controller", 00:23:15.860 
"max_namespaces": 10, 00:23:15.860 "min_cntlid": 1, 00:23:15.860 "max_cntlid": 65519, 00:23:15.860 "ana_reporting": false 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "nvmf_subsystem_add_host", 00:23:15.860 "params": { 00:23:15.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.860 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.860 "psk": "/tmp/tmp.gFI8vgMpPO" 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "nvmf_subsystem_add_ns", 00:23:15.860 "params": { 00:23:15.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.860 "namespace": { 00:23:15.860 "nsid": 1, 00:23:15.860 "bdev_name": "malloc0", 00:23:15.860 "nguid": "D22109C7CEB14382A5DA15571C281B4A", 00:23:15.860 "uuid": "d22109c7-ceb1-4382-a5da-15571c281b4a", 00:23:15.860 "no_auto_visible": false 00:23:15.860 } 00:23:15.860 } 00:23:15.860 }, 00:23:15.860 { 00:23:15.860 "method": "nvmf_subsystem_add_listener", 00:23:15.860 "params": { 00:23:15.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.860 "listen_address": { 00:23:15.860 "trtype": "TCP", 00:23:15.860 "adrfam": "IPv4", 00:23:15.860 "traddr": "10.0.0.2", 00:23:15.860 "trsvcid": "4420" 00:23:15.860 }, 00:23:15.860 "secure_channel": true 00:23:15.860 } 00:23:15.860 } 00:23:15.860 ] 00:23:15.860 } 00:23:15.860 ] 00:23:15.860 }' 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=470624 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 470624 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 470624 ']' 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.860 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.860 [2024-07-25 12:35:49.141014] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:15.860 [2024-07-25 12:35:49.141107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.860 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.860 [2024-07-25 12:35:49.228962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.121 [2024-07-25 12:35:49.334670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:16.121 [2024-07-25 12:35:49.334734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.121 [2024-07-25 12:35:49.334746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.121 [2024-07-25 12:35:49.334756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.121 [2024-07-25 12:35:49.334764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.121 [2024-07-25 12:35:49.334849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.381 [2024-07-25 12:35:49.557310] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.381 [2024-07-25 12:35:49.588191] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:16.381 [2024-07-25 12:35:49.604262] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.381 [2024-07-25 12:35:49.604626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.643 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.643 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:16.643 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.643 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.643 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=470812 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 470812 /var/tmp/bdevperf.sock 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 470812 ']' 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
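For readers following the trace, the target-side TLS setup that tls.sh drives through rpc.py (and that the JSON blob piped into nvmf_tgt via -c /dev/fd/62 reproduces) boils down to the sequence below. This is a condensed sketch of the commands already visible above, with rpc.py paths abbreviated relative to the SPDK tree; the PSK interchange file path, NQNs, address and port are the temporary values used by this particular run.

  # create the TCP transport and a subsystem, listening with TLS enabled (-k)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # back the subsystem with a malloc bdev and authorize the host by its PSK file
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO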
00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.643 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:16.643 "subsystems": [ 00:23:16.643 { 00:23:16.643 "subsystem": "keyring", 00:23:16.643 "config": [] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "iobuf", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "iobuf_set_options", 00:23:16.643 "params": { 00:23:16.643 "small_pool_count": 8192, 00:23:16.643 "large_pool_count": 1024, 00:23:16.643 "small_bufsize": 8192, 00:23:16.643 "large_bufsize": 135168 00:23:16.643 } 00:23:16.643 } 00:23:16.643 ] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "sock", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "sock_set_default_impl", 00:23:16.643 "params": { 00:23:16.643 "impl_name": "posix" 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "sock_impl_set_options", 00:23:16.643 "params": { 00:23:16.643 "impl_name": "ssl", 00:23:16.643 "recv_buf_size": 4096, 00:23:16.643 "send_buf_size": 4096, 00:23:16.643 "enable_recv_pipe": true, 00:23:16.643 "enable_quickack": false, 00:23:16.643 "enable_placement_id": 0, 00:23:16.643 "enable_zerocopy_send_server": true, 00:23:16.643 "enable_zerocopy_send_client": false, 00:23:16.643 "zerocopy_threshold": 0, 00:23:16.643 "tls_version": 0, 00:23:16.643 "enable_ktls": false 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "sock_impl_set_options", 00:23:16.643 "params": { 00:23:16.643 "impl_name": "posix", 00:23:16.643 "recv_buf_size": 2097152, 00:23:16.643 "send_buf_size": 2097152, 00:23:16.643 "enable_recv_pipe": true, 00:23:16.643 "enable_quickack": false, 00:23:16.643 "enable_placement_id": 0, 00:23:16.643 "enable_zerocopy_send_server": true, 00:23:16.643 "enable_zerocopy_send_client": false, 00:23:16.643 "zerocopy_threshold": 0, 00:23:16.643 "tls_version": 0, 00:23:16.643 "enable_ktls": false 00:23:16.643 } 00:23:16.643 } 00:23:16.643 ] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "vmd", 00:23:16.643 "config": [] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "accel", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "accel_set_options", 00:23:16.643 "params": { 00:23:16.643 "small_cache_size": 128, 00:23:16.643 "large_cache_size": 16, 00:23:16.643 "task_count": 2048, 00:23:16.643 "sequence_count": 2048, 00:23:16.643 "buf_count": 2048 00:23:16.643 } 00:23:16.643 } 00:23:16.643 ] 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "subsystem": "bdev", 00:23:16.643 "config": [ 00:23:16.643 { 00:23:16.643 "method": "bdev_set_options", 00:23:16.643 "params": { 00:23:16.643 "bdev_io_pool_size": 65535, 00:23:16.643 "bdev_io_cache_size": 256, 00:23:16.643 "bdev_auto_examine": true, 00:23:16.643 "iobuf_small_cache_size": 128, 00:23:16.643 "iobuf_large_cache_size": 16 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "bdev_raid_set_options", 00:23:16.643 "params": { 00:23:16.643 "process_window_size_kb": 1024, 00:23:16.643 "process_max_bandwidth_mb_sec": 0 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "bdev_iscsi_set_options", 
00:23:16.643 "params": { 00:23:16.643 "timeout_sec": 30 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "bdev_nvme_set_options", 00:23:16.643 "params": { 00:23:16.643 "action_on_timeout": "none", 00:23:16.643 "timeout_us": 0, 00:23:16.643 "timeout_admin_us": 0, 00:23:16.643 "keep_alive_timeout_ms": 10000, 00:23:16.643 "arbitration_burst": 0, 00:23:16.643 "low_priority_weight": 0, 00:23:16.643 "medium_priority_weight": 0, 00:23:16.643 "high_priority_weight": 0, 00:23:16.643 "nvme_adminq_poll_period_us": 10000, 00:23:16.643 "nvme_ioq_poll_period_us": 0, 00:23:16.643 "io_queue_requests": 512, 00:23:16.643 "delay_cmd_submit": true, 00:23:16.643 "transport_retry_count": 4, 00:23:16.643 "bdev_retry_count": 3, 00:23:16.643 "transport_ack_timeout": 0, 00:23:16.643 "ctrlr_loss_timeout_sec": 0, 00:23:16.643 "reconnect_delay_sec": 0, 00:23:16.643 "fast_io_fail_timeout_sec": 0, 00:23:16.643 "disable_auto_failback": false, 00:23:16.643 "generate_uuids": false, 00:23:16.643 "transport_tos": 0, 00:23:16.643 "nvme_error_stat": false, 00:23:16.643 "rdma_srq_size": 0, 00:23:16.643 "io_path_stat": false, 00:23:16.643 "allow_accel_sequence": false, 00:23:16.643 "rdma_max_cq_size": 0, 00:23:16.643 "rdma_cm_event_timeout_ms": 0, 00:23:16.643 "dhchap_digests": [ 00:23:16.643 "sha256", 00:23:16.643 "sha384", 00:23:16.643 "sha512" 00:23:16.643 ], 00:23:16.643 "dhchap_dhgroups": [ 00:23:16.643 "null", 00:23:16.643 "ffdhe2048", 00:23:16.643 "ffdhe3072", 00:23:16.643 "ffdhe4096", 00:23:16.643 "ffdhe6144", 00:23:16.643 "ffdhe8192" 00:23:16.643 ] 00:23:16.643 } 00:23:16.643 }, 00:23:16.643 { 00:23:16.643 "method": "bdev_nvme_attach_controller", 00:23:16.643 "params": { 00:23:16.643 "name": "TLSTEST", 00:23:16.643 "trtype": "TCP", 00:23:16.643 "adrfam": "IPv4", 00:23:16.644 "traddr": "10.0.0.2", 00:23:16.644 "trsvcid": "4420", 00:23:16.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.644 "prchk_reftag": false, 00:23:16.644 "prchk_guard": false, 00:23:16.644 "ctrlr_loss_timeout_sec": 0, 00:23:16.644 "reconnect_delay_sec": 0, 00:23:16.644 "fast_io_fail_timeout_sec": 0, 00:23:16.644 "psk": "/tmp/tmp.gFI8vgMpPO", 00:23:16.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.644 "hdgst": false, 00:23:16.644 "ddgst": false 00:23:16.644 } 00:23:16.644 }, 00:23:16.644 { 00:23:16.644 "method": "bdev_nvme_set_hotplug", 00:23:16.644 "params": { 00:23:16.644 "period_us": 100000, 00:23:16.644 "enable": false 00:23:16.644 } 00:23:16.644 }, 00:23:16.644 { 00:23:16.644 "method": "bdev_wait_for_examine" 00:23:16.644 } 00:23:16.644 ] 00:23:16.644 }, 00:23:16.644 { 00:23:16.644 "subsystem": "nbd", 00:23:16.644 "config": [] 00:23:16.644 } 00:23:16.644 ] 00:23:16.644 }' 00:23:16.905 [2024-07-25 12:35:50.094133] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:23:16.905 [2024-07-25 12:35:50.094230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470812 ] 00:23:16.905 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.905 [2024-07-25 12:35:50.235357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.165 [2024-07-25 12:35:50.397471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.426 [2024-07-25 12:35:50.590451] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.426 [2024-07-25 12:35:50.590664] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:17.687 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.687 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:17.687 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.687 Running I/O for 10 seconds... 00:23:29.916 00:23:29.916 Latency(us) 00:23:29.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.916 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:29.916 Verification LBA range: start 0x0 length 0x2000 00:23:29.916 TLSTESTn1 : 10.05 1650.20 6.45 0.00 0.00 77291.53 15426.17 74206.92 00:23:29.916 =================================================================================================================== 00:23:29.916 Total : 1650.20 6.45 0.00 0.00 77291.53 15426.17 74206.92 00:23:29.916 0 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 470812 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 470812 ']' 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 470812 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 470812 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 470812' 00:23:29.916 killing process with pid 470812 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 470812 00:23:29.916 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.916 00:23:29.916 Latency(us) 00:23:29.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.916 
=================================================================================================================== 00:23:29.916 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.916 [2024-07-25 12:36:01.249284] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 470812 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 470624 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 470624 ']' 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 470624 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 470624 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 470624' 00:23:29.916 killing process with pid 470624 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 470624 00:23:29.916 [2024-07-25 12:36:01.607065] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 470624 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=472870 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 472870 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 472870 ']' 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
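The initiator side of the 10-second run above is a bdevperf instance started idle (-z) on its own RPC socket, told over that socket to attach the TLS-protected controller, and then driven by the perform_tests helper. A condensed sketch combining the commands shown in the trace (paths abbreviated; the earlier run attaches via a separate RPC call, the later one passes the same attach in a generated config on /dev/fd/63):

  # start bdevperf in wait mode on /var/tmp/bdevperf.sock with a 128-deep 4 KiB verify workload
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach the TLS-enabled controller using the PSK interchange file
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO
  # kick off the queued I/O and collect the latency summary printed above
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests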
00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.916 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.916 [2024-07-25 12:36:01.874938] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:29.916 [2024-07-25 12:36:01.875008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.916 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.916 [2024-07-25 12:36:01.967218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.916 [2024-07-25 12:36:02.058263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.916 [2024-07-25 12:36:02.058319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.916 [2024-07-25 12:36:02.058327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.916 [2024-07-25 12:36:02.058334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.916 [2024-07-25 12:36:02.058339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.916 [2024-07-25 12:36:02.058366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.gFI8vgMpPO 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gFI8vgMpPO 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.916 [2024-07-25 12:36:02.973223] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.916 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.916 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:30.177 [2024-07-25 12:36:03.394302] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.177 [2024-07-25 12:36:03.394602] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.177 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:30.438 malloc0 00:23:30.438 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.699 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFI8vgMpPO 00:23:30.699 [2024-07-25 12:36:04.042116] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=473219 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 473219 /var/tmp/bdevperf.sock 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 473219 ']' 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.699 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.699 [2024-07-25 12:36:04.116281] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:23:30.699 [2024-07-25 12:36:04.116361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473219 ] 00:23:30.960 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.960 [2024-07-25 12:36:04.203424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.960 [2024-07-25 12:36:04.311840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:31.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gFI8vgMpPO 00:23:31.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:32.200 [2024-07-25 12:36:05.432034] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.200 nvme0n1 00:23:32.200 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:32.459 Running I/O for 1 seconds... 00:23:33.396 00:23:33.396 Latency(us) 00:23:33.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.396 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:33.396 Verification LBA range: start 0x0 length 0x2000 00:23:33.396 nvme0n1 : 1.05 3370.71 13.17 0.00 0.00 37239.06 8620.50 45169.43 00:23:33.396 =================================================================================================================== 00:23:33.396 Total : 3370.71 13.17 0.00 0.00 37239.06 8620.50 45169.43 00:23:33.396 0 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 473219 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 473219 ']' 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 473219 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473219 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473219' 00:23:33.396 killing process with pid 473219 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 473219 00:23:33.396 Received shutdown signal, test time 
was about 1.000000 seconds 00:23:33.396 00:23:33.396 Latency(us) 00:23:33.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.396 =================================================================================================================== 00:23:33.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.396 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 473219 00:23:33.657 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 472870 00:23:33.657 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 472870 ']' 00:23:33.657 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 472870 00:23:33.657 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:33.657 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.657 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 472870 00:23:33.657 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.657 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.657 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 472870' 00:23:33.657 killing process with pid 472870 00:23:33.657 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 472870 00:23:33.657 [2024-07-25 12:36:07.034675] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:33.657 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 472870 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=473699 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 473699 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 473699 ']' 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
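For reference, the TLS path exercised by the bdevperf runs in this log needs only two RPCs on the initiator side: register the PSK file as a keyring entry, then attach the NVMe/TCP controller with that key. A minimal bash sketch, reusing the RPC socket, key path and NQNs that appear in the log above (a sketch of what the test script issues, not a general-purpose recipe):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # Register the pre-shared key file under the name "key0".
  $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.gFI8vgMpPO
  # Attach a TLS-protected NVMe/TCP controller using that key
  # (SPDK logs "TLS support is considered experimental" for this call).
  $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Each attach is then followed by a perform_tests run driven through bdevperf.py against the same socket, as seen in the runs above and below.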
00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.917 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.917 [2024-07-25 12:36:07.275915] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:33.917 [2024-07-25 12:36:07.275985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.917 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.178 [2024-07-25 12:36:07.369613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.178 [2024-07-25 12:36:07.460050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.178 [2024-07-25 12:36:07.460112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.178 [2024-07-25 12:36:07.460120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.178 [2024-07-25 12:36:07.460126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.178 [2024-07-25 12:36:07.460132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.178 [2024-07-25 12:36:07.460161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.748 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.748 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:34.748 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.748 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:34.748 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.008 [2024-07-25 12:36:08.195232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.008 malloc0 00:23:35.008 [2024-07-25 12:36:08.225287] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.008 [2024-07-25 12:36:08.236804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=473875 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 473875 /var/tmp/bdevperf.sock 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:35.008 12:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 473875 ']' 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.008 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.008 [2024-07-25 12:36:08.312005] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:35.008 [2024-07-25 12:36:08.312068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473875 ] 00:23:35.008 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.008 [2024-07-25 12:36:08.400554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.269 [2024-07-25 12:36:08.507297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.841 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.841 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:35.841 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gFI8vgMpPO 00:23:36.101 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:36.361 [2024-07-25 12:36:09.543042] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.361 nvme0n1 00:23:36.361 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.361 Running I/O for 1 seconds... 
00:23:37.744 00:23:37.744 Latency(us) 00:23:37.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.744 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:37.744 Verification LBA range: start 0x0 length 0x2000 00:23:37.744 nvme0n1 : 1.05 3119.01 12.18 0.00 0.00 40317.83 7813.91 55655.19 00:23:37.744 =================================================================================================================== 00:23:37.744 Total : 3119.01 12.18 0.00 0.00 40317.83 7813.91 55655.19 00:23:37.744 0 00:23:37.744 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:37.744 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.744 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.744 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.744 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:37.744 "subsystems": [ 00:23:37.744 { 00:23:37.744 "subsystem": "keyring", 00:23:37.744 "config": [ 00:23:37.744 { 00:23:37.744 "method": "keyring_file_add_key", 00:23:37.744 "params": { 00:23:37.744 "name": "key0", 00:23:37.744 "path": "/tmp/tmp.gFI8vgMpPO" 00:23:37.744 } 00:23:37.744 } 00:23:37.744 ] 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "subsystem": "iobuf", 00:23:37.744 "config": [ 00:23:37.744 { 00:23:37.744 "method": "iobuf_set_options", 00:23:37.744 "params": { 00:23:37.744 "small_pool_count": 8192, 00:23:37.744 "large_pool_count": 1024, 00:23:37.744 "small_bufsize": 8192, 00:23:37.744 "large_bufsize": 135168 00:23:37.744 } 00:23:37.744 } 00:23:37.744 ] 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "subsystem": "sock", 00:23:37.744 "config": [ 00:23:37.744 { 00:23:37.744 "method": "sock_set_default_impl", 00:23:37.744 "params": { 00:23:37.744 "impl_name": "posix" 00:23:37.744 } 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "method": "sock_impl_set_options", 00:23:37.744 "params": { 00:23:37.744 "impl_name": "ssl", 00:23:37.744 "recv_buf_size": 4096, 00:23:37.744 "send_buf_size": 4096, 00:23:37.744 "enable_recv_pipe": true, 00:23:37.744 "enable_quickack": false, 00:23:37.744 "enable_placement_id": 0, 00:23:37.744 "enable_zerocopy_send_server": true, 00:23:37.744 "enable_zerocopy_send_client": false, 00:23:37.744 "zerocopy_threshold": 0, 00:23:37.744 "tls_version": 0, 00:23:37.744 "enable_ktls": false 00:23:37.744 } 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "method": "sock_impl_set_options", 00:23:37.744 "params": { 00:23:37.744 "impl_name": "posix", 00:23:37.744 "recv_buf_size": 2097152, 00:23:37.744 "send_buf_size": 2097152, 00:23:37.744 "enable_recv_pipe": true, 00:23:37.744 "enable_quickack": false, 00:23:37.744 "enable_placement_id": 0, 00:23:37.744 "enable_zerocopy_send_server": true, 00:23:37.744 "enable_zerocopy_send_client": false, 00:23:37.744 "zerocopy_threshold": 0, 00:23:37.744 "tls_version": 0, 00:23:37.744 "enable_ktls": false 00:23:37.744 } 00:23:37.744 } 00:23:37.744 ] 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "subsystem": "vmd", 00:23:37.744 "config": [] 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "subsystem": "accel", 00:23:37.744 "config": [ 00:23:37.744 { 00:23:37.744 "method": "accel_set_options", 00:23:37.744 "params": { 00:23:37.744 "small_cache_size": 128, 00:23:37.744 "large_cache_size": 16, 00:23:37.744 "task_count": 2048, 00:23:37.744 "sequence_count": 2048, 00:23:37.744 "buf_count": 
2048 00:23:37.744 } 00:23:37.744 } 00:23:37.744 ] 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "subsystem": "bdev", 00:23:37.744 "config": [ 00:23:37.744 { 00:23:37.744 "method": "bdev_set_options", 00:23:37.744 "params": { 00:23:37.744 "bdev_io_pool_size": 65535, 00:23:37.744 "bdev_io_cache_size": 256, 00:23:37.744 "bdev_auto_examine": true, 00:23:37.744 "iobuf_small_cache_size": 128, 00:23:37.744 "iobuf_large_cache_size": 16 00:23:37.744 } 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "method": "bdev_raid_set_options", 00:23:37.744 "params": { 00:23:37.744 "process_window_size_kb": 1024, 00:23:37.744 "process_max_bandwidth_mb_sec": 0 00:23:37.744 } 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "method": "bdev_iscsi_set_options", 00:23:37.744 "params": { 00:23:37.744 "timeout_sec": 30 00:23:37.744 } 00:23:37.744 }, 00:23:37.744 { 00:23:37.744 "method": "bdev_nvme_set_options", 00:23:37.744 "params": { 00:23:37.744 "action_on_timeout": "none", 00:23:37.744 "timeout_us": 0, 00:23:37.744 "timeout_admin_us": 0, 00:23:37.744 "keep_alive_timeout_ms": 10000, 00:23:37.744 "arbitration_burst": 0, 00:23:37.744 "low_priority_weight": 0, 00:23:37.744 "medium_priority_weight": 0, 00:23:37.744 "high_priority_weight": 0, 00:23:37.744 "nvme_adminq_poll_period_us": 10000, 00:23:37.744 "nvme_ioq_poll_period_us": 0, 00:23:37.744 "io_queue_requests": 0, 00:23:37.744 "delay_cmd_submit": true, 00:23:37.744 "transport_retry_count": 4, 00:23:37.744 "bdev_retry_count": 3, 00:23:37.744 "transport_ack_timeout": 0, 00:23:37.744 "ctrlr_loss_timeout_sec": 0, 00:23:37.744 "reconnect_delay_sec": 0, 00:23:37.744 "fast_io_fail_timeout_sec": 0, 00:23:37.744 "disable_auto_failback": false, 00:23:37.745 "generate_uuids": false, 00:23:37.745 "transport_tos": 0, 00:23:37.745 "nvme_error_stat": false, 00:23:37.745 "rdma_srq_size": 0, 00:23:37.745 "io_path_stat": false, 00:23:37.745 "allow_accel_sequence": false, 00:23:37.745 "rdma_max_cq_size": 0, 00:23:37.745 "rdma_cm_event_timeout_ms": 0, 00:23:37.745 "dhchap_digests": [ 00:23:37.745 "sha256", 00:23:37.745 "sha384", 00:23:37.745 "sha512" 00:23:37.745 ], 00:23:37.745 "dhchap_dhgroups": [ 00:23:37.745 "null", 00:23:37.745 "ffdhe2048", 00:23:37.745 "ffdhe3072", 00:23:37.745 "ffdhe4096", 00:23:37.745 "ffdhe6144", 00:23:37.745 "ffdhe8192" 00:23:37.745 ] 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "bdev_nvme_set_hotplug", 00:23:37.745 "params": { 00:23:37.745 "period_us": 100000, 00:23:37.745 "enable": false 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "bdev_malloc_create", 00:23:37.745 "params": { 00:23:37.745 "name": "malloc0", 00:23:37.745 "num_blocks": 8192, 00:23:37.745 "block_size": 4096, 00:23:37.745 "physical_block_size": 4096, 00:23:37.745 "uuid": "42278b92-4cdb-46eb-827a-9ce8a730fefb", 00:23:37.745 "optimal_io_boundary": 0, 00:23:37.745 "md_size": 0, 00:23:37.745 "dif_type": 0, 00:23:37.745 "dif_is_head_of_md": false, 00:23:37.745 "dif_pi_format": 0 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "bdev_wait_for_examine" 00:23:37.745 } 00:23:37.745 ] 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "subsystem": "nbd", 00:23:37.745 "config": [] 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "subsystem": "scheduler", 00:23:37.745 "config": [ 00:23:37.745 { 00:23:37.745 "method": "framework_set_scheduler", 00:23:37.745 "params": { 00:23:37.745 "name": "static" 00:23:37.745 } 00:23:37.745 } 00:23:37.745 ] 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "subsystem": "nvmf", 00:23:37.745 "config": [ 00:23:37.745 { 00:23:37.745 
"method": "nvmf_set_config", 00:23:37.745 "params": { 00:23:37.745 "discovery_filter": "match_any", 00:23:37.745 "admin_cmd_passthru": { 00:23:37.745 "identify_ctrlr": false 00:23:37.745 } 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "nvmf_set_max_subsystems", 00:23:37.745 "params": { 00:23:37.745 "max_subsystems": 1024 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "nvmf_set_crdt", 00:23:37.745 "params": { 00:23:37.745 "crdt1": 0, 00:23:37.745 "crdt2": 0, 00:23:37.745 "crdt3": 0 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "nvmf_create_transport", 00:23:37.745 "params": { 00:23:37.745 "trtype": "TCP", 00:23:37.745 "max_queue_depth": 128, 00:23:37.745 "max_io_qpairs_per_ctrlr": 127, 00:23:37.745 "in_capsule_data_size": 4096, 00:23:37.745 "max_io_size": 131072, 00:23:37.745 "io_unit_size": 131072, 00:23:37.745 "max_aq_depth": 128, 00:23:37.745 "num_shared_buffers": 511, 00:23:37.745 "buf_cache_size": 4294967295, 00:23:37.745 "dif_insert_or_strip": false, 00:23:37.745 "zcopy": false, 00:23:37.745 "c2h_success": false, 00:23:37.745 "sock_priority": 0, 00:23:37.745 "abort_timeout_sec": 1, 00:23:37.745 "ack_timeout": 0, 00:23:37.745 "data_wr_pool_size": 0 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "nvmf_create_subsystem", 00:23:37.745 "params": { 00:23:37.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.745 "allow_any_host": false, 00:23:37.745 "serial_number": "00000000000000000000", 00:23:37.745 "model_number": "SPDK bdev Controller", 00:23:37.745 "max_namespaces": 32, 00:23:37.745 "min_cntlid": 1, 00:23:37.745 "max_cntlid": 65519, 00:23:37.745 "ana_reporting": false 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "nvmf_subsystem_add_host", 00:23:37.745 "params": { 00:23:37.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.745 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.745 "psk": "key0" 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "nvmf_subsystem_add_ns", 00:23:37.745 "params": { 00:23:37.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.745 "namespace": { 00:23:37.745 "nsid": 1, 00:23:37.745 "bdev_name": "malloc0", 00:23:37.745 "nguid": "42278B924CDB46EB827A9CE8A730FEFB", 00:23:37.745 "uuid": "42278b92-4cdb-46eb-827a-9ce8a730fefb", 00:23:37.745 "no_auto_visible": false 00:23:37.745 } 00:23:37.745 } 00:23:37.745 }, 00:23:37.745 { 00:23:37.745 "method": "nvmf_subsystem_add_listener", 00:23:37.745 "params": { 00:23:37.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.745 "listen_address": { 00:23:37.745 "trtype": "TCP", 00:23:37.745 "adrfam": "IPv4", 00:23:37.745 "traddr": "10.0.0.2", 00:23:37.745 "trsvcid": "4420" 00:23:37.745 }, 00:23:37.745 "secure_channel": false, 00:23:37.745 "sock_impl": "ssl" 00:23:37.745 } 00:23:37.745 } 00:23:37.745 ] 00:23:37.745 } 00:23:37.745 ] 00:23:37.745 }' 00:23:37.745 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:38.006 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:38.006 "subsystems": [ 00:23:38.006 { 00:23:38.006 "subsystem": "keyring", 00:23:38.006 "config": [ 00:23:38.006 { 00:23:38.006 "method": "keyring_file_add_key", 00:23:38.006 "params": { 00:23:38.006 "name": "key0", 00:23:38.006 "path": "/tmp/tmp.gFI8vgMpPO" 00:23:38.006 } 00:23:38.006 } 00:23:38.006 ] 00:23:38.006 }, 00:23:38.006 { 00:23:38.006 "subsystem": "iobuf", 00:23:38.006 
"config": [ 00:23:38.006 { 00:23:38.006 "method": "iobuf_set_options", 00:23:38.006 "params": { 00:23:38.006 "small_pool_count": 8192, 00:23:38.006 "large_pool_count": 1024, 00:23:38.006 "small_bufsize": 8192, 00:23:38.006 "large_bufsize": 135168 00:23:38.006 } 00:23:38.006 } 00:23:38.006 ] 00:23:38.006 }, 00:23:38.006 { 00:23:38.006 "subsystem": "sock", 00:23:38.006 "config": [ 00:23:38.006 { 00:23:38.006 "method": "sock_set_default_impl", 00:23:38.006 "params": { 00:23:38.006 "impl_name": "posix" 00:23:38.006 } 00:23:38.006 }, 00:23:38.006 { 00:23:38.006 "method": "sock_impl_set_options", 00:23:38.006 "params": { 00:23:38.006 "impl_name": "ssl", 00:23:38.006 "recv_buf_size": 4096, 00:23:38.006 "send_buf_size": 4096, 00:23:38.006 "enable_recv_pipe": true, 00:23:38.006 "enable_quickack": false, 00:23:38.006 "enable_placement_id": 0, 00:23:38.006 "enable_zerocopy_send_server": true, 00:23:38.006 "enable_zerocopy_send_client": false, 00:23:38.006 "zerocopy_threshold": 0, 00:23:38.006 "tls_version": 0, 00:23:38.006 "enable_ktls": false 00:23:38.006 } 00:23:38.006 }, 00:23:38.006 { 00:23:38.006 "method": "sock_impl_set_options", 00:23:38.006 "params": { 00:23:38.006 "impl_name": "posix", 00:23:38.006 "recv_buf_size": 2097152, 00:23:38.006 "send_buf_size": 2097152, 00:23:38.006 "enable_recv_pipe": true, 00:23:38.006 "enable_quickack": false, 00:23:38.006 "enable_placement_id": 0, 00:23:38.006 "enable_zerocopy_send_server": true, 00:23:38.006 "enable_zerocopy_send_client": false, 00:23:38.006 "zerocopy_threshold": 0, 00:23:38.006 "tls_version": 0, 00:23:38.006 "enable_ktls": false 00:23:38.006 } 00:23:38.006 } 00:23:38.006 ] 00:23:38.006 }, 00:23:38.006 { 00:23:38.006 "subsystem": "vmd", 00:23:38.006 "config": [] 00:23:38.006 }, 00:23:38.006 { 00:23:38.007 "subsystem": "accel", 00:23:38.007 "config": [ 00:23:38.007 { 00:23:38.007 "method": "accel_set_options", 00:23:38.007 "params": { 00:23:38.007 "small_cache_size": 128, 00:23:38.007 "large_cache_size": 16, 00:23:38.007 "task_count": 2048, 00:23:38.007 "sequence_count": 2048, 00:23:38.007 "buf_count": 2048 00:23:38.007 } 00:23:38.007 } 00:23:38.007 ] 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "subsystem": "bdev", 00:23:38.007 "config": [ 00:23:38.007 { 00:23:38.007 "method": "bdev_set_options", 00:23:38.007 "params": { 00:23:38.007 "bdev_io_pool_size": 65535, 00:23:38.007 "bdev_io_cache_size": 256, 00:23:38.007 "bdev_auto_examine": true, 00:23:38.007 "iobuf_small_cache_size": 128, 00:23:38.007 "iobuf_large_cache_size": 16 00:23:38.007 } 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "method": "bdev_raid_set_options", 00:23:38.007 "params": { 00:23:38.007 "process_window_size_kb": 1024, 00:23:38.007 "process_max_bandwidth_mb_sec": 0 00:23:38.007 } 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "method": "bdev_iscsi_set_options", 00:23:38.007 "params": { 00:23:38.007 "timeout_sec": 30 00:23:38.007 } 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "method": "bdev_nvme_set_options", 00:23:38.007 "params": { 00:23:38.007 "action_on_timeout": "none", 00:23:38.007 "timeout_us": 0, 00:23:38.007 "timeout_admin_us": 0, 00:23:38.007 "keep_alive_timeout_ms": 10000, 00:23:38.007 "arbitration_burst": 0, 00:23:38.007 "low_priority_weight": 0, 00:23:38.007 "medium_priority_weight": 0, 00:23:38.007 "high_priority_weight": 0, 00:23:38.007 "nvme_adminq_poll_period_us": 10000, 00:23:38.007 "nvme_ioq_poll_period_us": 0, 00:23:38.007 "io_queue_requests": 512, 00:23:38.007 "delay_cmd_submit": true, 00:23:38.007 "transport_retry_count": 4, 00:23:38.007 "bdev_retry_count": 3, 
00:23:38.007 "transport_ack_timeout": 0, 00:23:38.007 "ctrlr_loss_timeout_sec": 0, 00:23:38.007 "reconnect_delay_sec": 0, 00:23:38.007 "fast_io_fail_timeout_sec": 0, 00:23:38.007 "disable_auto_failback": false, 00:23:38.007 "generate_uuids": false, 00:23:38.007 "transport_tos": 0, 00:23:38.007 "nvme_error_stat": false, 00:23:38.007 "rdma_srq_size": 0, 00:23:38.007 "io_path_stat": false, 00:23:38.007 "allow_accel_sequence": false, 00:23:38.007 "rdma_max_cq_size": 0, 00:23:38.007 "rdma_cm_event_timeout_ms": 0, 00:23:38.007 "dhchap_digests": [ 00:23:38.007 "sha256", 00:23:38.007 "sha384", 00:23:38.007 "sha512" 00:23:38.007 ], 00:23:38.007 "dhchap_dhgroups": [ 00:23:38.007 "null", 00:23:38.007 "ffdhe2048", 00:23:38.007 "ffdhe3072", 00:23:38.007 "ffdhe4096", 00:23:38.007 "ffdhe6144", 00:23:38.007 "ffdhe8192" 00:23:38.007 ] 00:23:38.007 } 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "method": "bdev_nvme_attach_controller", 00:23:38.007 "params": { 00:23:38.007 "name": "nvme0", 00:23:38.007 "trtype": "TCP", 00:23:38.007 "adrfam": "IPv4", 00:23:38.007 "traddr": "10.0.0.2", 00:23:38.007 "trsvcid": "4420", 00:23:38.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.007 "prchk_reftag": false, 00:23:38.007 "prchk_guard": false, 00:23:38.007 "ctrlr_loss_timeout_sec": 0, 00:23:38.007 "reconnect_delay_sec": 0, 00:23:38.007 "fast_io_fail_timeout_sec": 0, 00:23:38.007 "psk": "key0", 00:23:38.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.007 "hdgst": false, 00:23:38.007 "ddgst": false 00:23:38.007 } 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "method": "bdev_nvme_set_hotplug", 00:23:38.007 "params": { 00:23:38.007 "period_us": 100000, 00:23:38.007 "enable": false 00:23:38.007 } 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "method": "bdev_enable_histogram", 00:23:38.007 "params": { 00:23:38.007 "name": "nvme0n1", 00:23:38.007 "enable": true 00:23:38.007 } 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "method": "bdev_wait_for_examine" 00:23:38.007 } 00:23:38.007 ] 00:23:38.007 }, 00:23:38.007 { 00:23:38.007 "subsystem": "nbd", 00:23:38.007 "config": [] 00:23:38.007 } 00:23:38.007 ] 00:23:38.007 }' 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 473875 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 473875 ']' 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 473875 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473875 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473875' 00:23:38.007 killing process with pid 473875 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 473875 00:23:38.007 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.007 00:23:38.007 Latency(us) 00:23:38.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.007 
=================================================================================================================== 00:23:38.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 473875 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 473699 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 473699 ']' 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 473699 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.007 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473699 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473699' 00:23:38.268 killing process with pid 473699 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 473699 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 473699 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:38.268 "subsystems": [ 00:23:38.268 { 00:23:38.268 "subsystem": "keyring", 00:23:38.268 "config": [ 00:23:38.268 { 00:23:38.268 "method": "keyring_file_add_key", 00:23:38.268 "params": { 00:23:38.268 "name": "key0", 00:23:38.268 "path": "/tmp/tmp.gFI8vgMpPO" 00:23:38.268 } 00:23:38.268 } 00:23:38.268 ] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "iobuf", 00:23:38.268 "config": [ 00:23:38.268 { 00:23:38.268 "method": "iobuf_set_options", 00:23:38.268 "params": { 00:23:38.268 "small_pool_count": 8192, 00:23:38.268 "large_pool_count": 1024, 00:23:38.268 "small_bufsize": 8192, 00:23:38.268 "large_bufsize": 135168 00:23:38.268 } 00:23:38.268 } 00:23:38.268 ] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "sock", 00:23:38.268 "config": [ 00:23:38.268 { 00:23:38.268 "method": "sock_set_default_impl", 00:23:38.268 "params": { 00:23:38.268 "impl_name": "posix" 00:23:38.268 } 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "method": "sock_impl_set_options", 00:23:38.268 "params": { 00:23:38.268 "impl_name": "ssl", 00:23:38.268 "recv_buf_size": 4096, 00:23:38.268 "send_buf_size": 4096, 00:23:38.268 "enable_recv_pipe": true, 00:23:38.268 "enable_quickack": false, 00:23:38.268 "enable_placement_id": 0, 00:23:38.268 "enable_zerocopy_send_server": true, 00:23:38.268 "enable_zerocopy_send_client": false, 00:23:38.268 "zerocopy_threshold": 0, 00:23:38.268 "tls_version": 0, 00:23:38.268 "enable_ktls": 
false 00:23:38.268 } 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "method": "sock_impl_set_options", 00:23:38.268 "params": { 00:23:38.268 "impl_name": "posix", 00:23:38.268 "recv_buf_size": 2097152, 00:23:38.268 "send_buf_size": 2097152, 00:23:38.268 "enable_recv_pipe": true, 00:23:38.268 "enable_quickack": false, 00:23:38.268 "enable_placement_id": 0, 00:23:38.268 "enable_zerocopy_send_server": true, 00:23:38.268 "enable_zerocopy_send_client": false, 00:23:38.268 "zerocopy_threshold": 0, 00:23:38.268 "tls_version": 0, 00:23:38.268 "enable_ktls": false 00:23:38.268 } 00:23:38.268 } 00:23:38.268 ] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "vmd", 00:23:38.268 "config": [] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "accel", 00:23:38.268 "config": [ 00:23:38.268 { 00:23:38.268 "method": "accel_set_options", 00:23:38.268 "params": { 00:23:38.268 "small_cache_size": 128, 00:23:38.268 "large_cache_size": 16, 00:23:38.268 "task_count": 2048, 00:23:38.268 "sequence_count": 2048, 00:23:38.268 "buf_count": 2048 00:23:38.268 } 00:23:38.268 } 00:23:38.268 ] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "bdev", 00:23:38.268 "config": [ 00:23:38.268 { 00:23:38.268 "method": "bdev_set_options", 00:23:38.268 "params": { 00:23:38.268 "bdev_io_pool_size": 65535, 00:23:38.268 "bdev_io_cache_size": 256, 00:23:38.268 "bdev_auto_examine": true, 00:23:38.268 "iobuf_small_cache_size": 128, 00:23:38.268 "iobuf_large_cache_size": 16 00:23:38.268 } 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "method": "bdev_raid_set_options", 00:23:38.268 "params": { 00:23:38.268 "process_window_size_kb": 1024, 00:23:38.268 "process_max_bandwidth_mb_sec": 0 00:23:38.268 } 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "method": "bdev_iscsi_set_options", 00:23:38.268 "params": { 00:23:38.268 "timeout_sec": 30 00:23:38.268 } 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "method": "bdev_nvme_set_options", 00:23:38.268 "params": { 00:23:38.268 "action_on_timeout": "none", 00:23:38.268 "timeout_us": 0, 00:23:38.268 "timeout_admin_us": 0, 00:23:38.268 "keep_alive_timeout_ms": 10000, 00:23:38.268 "arbitration_burst": 0, 00:23:38.268 "low_priority_weight": 0, 00:23:38.268 "medium_priority_weight": 0, 00:23:38.268 "high_priority_weight": 0, 00:23:38.268 "nvme_adminq_poll_period_us": 10000, 00:23:38.268 "nvme_ioq_poll_period_us": 0, 00:23:38.268 "io_queue_requests": 0, 00:23:38.268 "delay_cmd_submit": true, 00:23:38.268 "transport_retry_count": 4, 00:23:38.268 "bdev_retry_count": 3, 00:23:38.268 "transport_ack_timeout": 0, 00:23:38.268 "ctrlr_loss_timeout_sec": 0, 00:23:38.268 "reconnect_delay_sec": 0, 00:23:38.268 "fast_io_fail_timeout_sec": 0, 00:23:38.268 "disable_auto_failback": false, 00:23:38.268 "generate_uuids": false, 00:23:38.268 "transport_tos": 0, 00:23:38.268 "nvme_error_stat": false, 00:23:38.268 "rdma_srq_size": 0, 00:23:38.268 "io_path_stat": false, 00:23:38.268 "allow_accel_sequence": false, 00:23:38.268 "rdma_max_cq_size": 0, 00:23:38.268 "rdma_cm_event_timeout_ms": 0, 00:23:38.268 "dhchap_digests": [ 00:23:38.268 "sha256", 00:23:38.268 "sha384", 00:23:38.268 "sha512" 00:23:38.268 ], 00:23:38.268 "dhchap_dhgroups": [ 00:23:38.268 "null", 00:23:38.268 "ffdhe2048", 00:23:38.268 "ffdhe3072", 00:23:38.268 "ffdhe4096", 00:23:38.268 "ffdhe6144", 00:23:38.268 "ffdhe8192" 00:23:38.268 ] 00:23:38.268 } 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "method": "bdev_nvme_set_hotplug", 00:23:38.268 "params": { 00:23:38.268 "period_us": 100000, 00:23:38.268 "enable": false 00:23:38.268 } 00:23:38.268 }, 
00:23:38.268 { 00:23:38.268 "method": "bdev_malloc_create", 00:23:38.268 "params": { 00:23:38.268 "name": "malloc0", 00:23:38.268 "num_blocks": 8192, 00:23:38.268 "block_size": 4096, 00:23:38.268 "physical_block_size": 4096, 00:23:38.268 "uuid": "42278b92-4cdb-46eb-827a-9ce8a730fefb", 00:23:38.268 "optimal_io_boundary": 0, 00:23:38.268 "md_size": 0, 00:23:38.268 "dif_type": 0, 00:23:38.268 "dif_is_head_of_md": false, 00:23:38.268 "dif_pi_format": 0 00:23:38.268 } 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "method": "bdev_wait_for_examine" 00:23:38.268 } 00:23:38.268 ] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "nbd", 00:23:38.268 "config": [] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "scheduler", 00:23:38.268 "config": [ 00:23:38.268 { 00:23:38.268 "method": "framework_set_scheduler", 00:23:38.268 "params": { 00:23:38.268 "name": "static" 00:23:38.268 } 00:23:38.268 } 00:23:38.268 ] 00:23:38.268 }, 00:23:38.268 { 00:23:38.268 "subsystem": "nvmf", 00:23:38.268 "config": [ 00:23:38.268 { 00:23:38.268 "method": "nvmf_set_config", 00:23:38.269 "params": { 00:23:38.269 "discovery_filter": "match_any", 00:23:38.269 "admin_cmd_passthru": { 00:23:38.269 "identify_ctrlr": false 00:23:38.269 } 00:23:38.269 } 00:23:38.269 }, 00:23:38.269 { 00:23:38.269 "method": "nvmf_set_max_subsystems", 00:23:38.269 "params": { 00:23:38.269 "max_subsystems": 1024 00:23:38.269 } 00:23:38.269 }, 00:23:38.269 { 00:23:38.269 "method": "nvmf_set_crdt", 00:23:38.269 "params": { 00:23:38.269 "crdt1": 0, 00:23:38.269 "crdt2": 0, 00:23:38.269 "crdt3": 0 00:23:38.269 } 00:23:38.269 }, 00:23:38.269 { 00:23:38.269 "method": "nvmf_create_transport", 00:23:38.269 "params": { 00:23:38.269 "trtype": "TCP", 00:23:38.269 "max_queue_depth": 128, 00:23:38.269 "max_io_qpairs_per_ctrlr": 127, 00:23:38.269 "in_capsule_data_size": 4096, 00:23:38.269 "max_io_size": 131072, 00:23:38.269 "io_unit_size": 131072, 00:23:38.269 "max_aq_depth": 128, 00:23:38.269 "num_shared_buffers": 511, 00:23:38.269 "buf_cache_size": 4294967295, 00:23:38.269 "dif_insert_or_strip": false, 00:23:38.269 "zcopy": false, 00:23:38.269 "c2h_success": false, 00:23:38.269 "sock_priority": 0, 00:23:38.269 "abort_timeout_sec": 1, 00:23:38.269 "ack_timeout": 0, 00:23:38.269 "data_wr_pool_size": 0 00:23:38.269 } 00:23:38.269 }, 00:23:38.269 { 00:23:38.269 "method": "nvmf_create_subsystem", 00:23:38.269 "params": { 00:23:38.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.269 "allow_any_host": false, 00:23:38.269 "serial_number": "00000000000000000000", 00:23:38.269 "model_number": "SPDK bdev Controller", 00:23:38.269 "max_namespaces": 32, 00:23:38.269 "min_cntlid": 1, 00:23:38.269 "max_cntlid": 65519, 00:23:38.269 "ana_reporting": false 00:23:38.269 } 00:23:38.269 }, 00:23:38.269 { 00:23:38.269 "method": "nvmf_subsystem_add_host", 00:23:38.269 "params": { 00:23:38.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.269 "host": "nqn.2016-06.io.spdk:host1", 00:23:38.269 "psk": "key0" 00:23:38.269 } 00:23:38.269 }, 00:23:38.269 { 00:23:38.269 "method": "nvmf_subsystem_add_ns", 00:23:38.269 "params": { 00:23:38.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.269 "namespace": { 00:23:38.269 "nsid": 1, 00:23:38.269 "bdev_name": "malloc0", 00:23:38.269 "nguid": "42278B924CDB46EB827A9CE8A730FEFB", 00:23:38.269 "uuid": "42278b92-4cdb-46eb-827a-9ce8a730fefb", 00:23:38.269 "no_auto_visible": false 00:23:38.269 } 00:23:38.269 } 00:23:38.269 }, 00:23:38.269 { 00:23:38.269 "method": "nvmf_subsystem_add_listener", 00:23:38.269 "params": { 00:23:38.269 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:38.269 "listen_address": { 00:23:38.269 "trtype": "TCP", 00:23:38.269 "adrfam": "IPv4", 00:23:38.269 "traddr": "10.0.0.2", 00:23:38.269 "trsvcid": "4420" 00:23:38.269 }, 00:23:38.269 "secure_channel": false, 00:23:38.269 "sock_impl": "ssl" 00:23:38.269 } 00:23:38.269 } 00:23:38.269 ] 00:23:38.269 } 00:23:38.269 ] 00:23:38.269 }' 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=474935 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 474935 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 474935 ']' 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.269 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.269 [2024-07-25 12:36:11.676229] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:38.269 [2024-07-25 12:36:11.676298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.529 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.529 [2024-07-25 12:36:11.763326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.529 [2024-07-25 12:36:11.824395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.529 [2024-07-25 12:36:11.824429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.529 [2024-07-25 12:36:11.824437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.529 [2024-07-25 12:36:11.824443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.529 [2024-07-25 12:36:11.824448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.529 [2024-07-25 12:36:11.824496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.789 [2024-07-25 12:36:12.019269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.789 [2024-07-25 12:36:12.060576] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.790 [2024-07-25 12:36:12.060778] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=474995 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 474995 /var/tmp/bdevperf.sock 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 474995 ']' 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
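The initiator side below follows the same replay pattern: bdevperf is started from the bperfcfg JSON captured earlier (it already carries the keyring key, the bdev_nvme_attach_controller entry with "psk": "key0", and bdev_enable_histogram), and is then driven over its RPC socket. A sketch under the assumption that bperfcfg holds that saved JSON:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf idle (-z) with the saved config; <(...) appears as /dev/fd/63 below.
  $SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
  # Once the socket is up, confirm the TLS-attached controller exists and run the workload.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # expect "nvme0"
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests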
00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.429 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:39.429 "subsystems": [ 00:23:39.429 { 00:23:39.429 "subsystem": "keyring", 00:23:39.429 "config": [ 00:23:39.429 { 00:23:39.429 "method": "keyring_file_add_key", 00:23:39.429 "params": { 00:23:39.429 "name": "key0", 00:23:39.429 "path": "/tmp/tmp.gFI8vgMpPO" 00:23:39.429 } 00:23:39.429 } 00:23:39.429 ] 00:23:39.429 }, 00:23:39.429 { 00:23:39.429 "subsystem": "iobuf", 00:23:39.429 "config": [ 00:23:39.429 { 00:23:39.429 "method": "iobuf_set_options", 00:23:39.429 "params": { 00:23:39.429 "small_pool_count": 8192, 00:23:39.429 "large_pool_count": 1024, 00:23:39.429 "small_bufsize": 8192, 00:23:39.429 "large_bufsize": 135168 00:23:39.429 } 00:23:39.429 } 00:23:39.429 ] 00:23:39.429 }, 00:23:39.429 { 00:23:39.429 "subsystem": "sock", 00:23:39.429 "config": [ 00:23:39.429 { 00:23:39.429 "method": "sock_set_default_impl", 00:23:39.429 "params": { 00:23:39.429 "impl_name": "posix" 00:23:39.429 } 00:23:39.429 }, 00:23:39.429 { 00:23:39.429 "method": "sock_impl_set_options", 00:23:39.429 "params": { 00:23:39.429 "impl_name": "ssl", 00:23:39.429 "recv_buf_size": 4096, 00:23:39.429 "send_buf_size": 4096, 00:23:39.429 "enable_recv_pipe": true, 00:23:39.429 "enable_quickack": false, 00:23:39.429 "enable_placement_id": 0, 00:23:39.429 "enable_zerocopy_send_server": true, 00:23:39.429 "enable_zerocopy_send_client": false, 00:23:39.429 "zerocopy_threshold": 0, 00:23:39.429 "tls_version": 0, 00:23:39.429 "enable_ktls": false 00:23:39.429 } 00:23:39.429 }, 00:23:39.429 { 00:23:39.429 "method": "sock_impl_set_options", 00:23:39.429 "params": { 00:23:39.429 "impl_name": "posix", 00:23:39.429 "recv_buf_size": 2097152, 00:23:39.429 "send_buf_size": 2097152, 00:23:39.429 "enable_recv_pipe": true, 00:23:39.429 "enable_quickack": false, 00:23:39.429 "enable_placement_id": 0, 00:23:39.429 "enable_zerocopy_send_server": true, 00:23:39.429 "enable_zerocopy_send_client": false, 00:23:39.430 "zerocopy_threshold": 0, 00:23:39.430 "tls_version": 0, 00:23:39.430 "enable_ktls": false 00:23:39.430 } 00:23:39.430 } 00:23:39.430 ] 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "subsystem": "vmd", 00:23:39.430 "config": [] 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "subsystem": "accel", 00:23:39.430 "config": [ 00:23:39.430 { 00:23:39.430 "method": "accel_set_options", 00:23:39.430 "params": { 00:23:39.430 "small_cache_size": 128, 00:23:39.430 "large_cache_size": 16, 00:23:39.430 "task_count": 2048, 00:23:39.430 "sequence_count": 2048, 00:23:39.430 "buf_count": 2048 00:23:39.430 } 00:23:39.430 } 00:23:39.430 ] 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "subsystem": "bdev", 00:23:39.430 "config": [ 00:23:39.430 { 00:23:39.430 "method": "bdev_set_options", 00:23:39.430 "params": { 00:23:39.430 "bdev_io_pool_size": 65535, 00:23:39.430 "bdev_io_cache_size": 256, 00:23:39.430 "bdev_auto_examine": true, 00:23:39.430 "iobuf_small_cache_size": 128, 00:23:39.430 "iobuf_large_cache_size": 16 00:23:39.430 } 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "method": "bdev_raid_set_options", 00:23:39.430 
"params": { 00:23:39.430 "process_window_size_kb": 1024, 00:23:39.430 "process_max_bandwidth_mb_sec": 0 00:23:39.430 } 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "method": "bdev_iscsi_set_options", 00:23:39.430 "params": { 00:23:39.430 "timeout_sec": 30 00:23:39.430 } 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "method": "bdev_nvme_set_options", 00:23:39.430 "params": { 00:23:39.430 "action_on_timeout": "none", 00:23:39.430 "timeout_us": 0, 00:23:39.430 "timeout_admin_us": 0, 00:23:39.430 "keep_alive_timeout_ms": 10000, 00:23:39.430 "arbitration_burst": 0, 00:23:39.430 "low_priority_weight": 0, 00:23:39.430 "medium_priority_weight": 0, 00:23:39.430 "high_priority_weight": 0, 00:23:39.430 "nvme_adminq_poll_period_us": 10000, 00:23:39.430 "nvme_ioq_poll_period_us": 0, 00:23:39.430 "io_queue_requests": 512, 00:23:39.430 "delay_cmd_submit": true, 00:23:39.430 "transport_retry_count": 4, 00:23:39.430 "bdev_retry_count": 3, 00:23:39.430 "transport_ack_timeout": 0, 00:23:39.430 "ctrlr_loss_timeout_sec": 0, 00:23:39.430 "reconnect_delay_sec": 0, 00:23:39.430 "fast_io_fail_timeout_sec": 0, 00:23:39.430 "disable_auto_failback": false, 00:23:39.430 "generate_uuids": false, 00:23:39.430 "transport_tos": 0, 00:23:39.430 "nvme_error_stat": false, 00:23:39.430 "rdma_srq_size": 0, 00:23:39.430 "io_path_stat": false, 00:23:39.430 "allow_accel_sequence": false, 00:23:39.430 "rdma_max_cq_size": 0, 00:23:39.430 "rdma_cm_event_timeout_ms": 0, 00:23:39.430 "dhchap_digests": [ 00:23:39.430 "sha256", 00:23:39.430 "sha384", 00:23:39.430 "sha512" 00:23:39.430 ], 00:23:39.430 "dhchap_dhgroups": [ 00:23:39.430 "null", 00:23:39.430 "ffdhe2048", 00:23:39.430 "ffdhe3072", 00:23:39.430 "ffdhe4096", 00:23:39.430 "ffdhe6144", 00:23:39.430 "ffdhe8192" 00:23:39.430 ] 00:23:39.430 } 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "method": "bdev_nvme_attach_controller", 00:23:39.430 "params": { 00:23:39.430 "name": "nvme0", 00:23:39.430 "trtype": "TCP", 00:23:39.430 "adrfam": "IPv4", 00:23:39.430 "traddr": "10.0.0.2", 00:23:39.430 "trsvcid": "4420", 00:23:39.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.430 "prchk_reftag": false, 00:23:39.430 "prchk_guard": false, 00:23:39.430 "ctrlr_loss_timeout_sec": 0, 00:23:39.430 "reconnect_delay_sec": 0, 00:23:39.430 "fast_io_fail_timeout_sec": 0, 00:23:39.430 "psk": "key0", 00:23:39.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.430 "hdgst": false, 00:23:39.430 "ddgst": false 00:23:39.430 } 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "method": "bdev_nvme_set_hotplug", 00:23:39.430 "params": { 00:23:39.430 "period_us": 100000, 00:23:39.430 "enable": false 00:23:39.430 } 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "method": "bdev_enable_histogram", 00:23:39.430 "params": { 00:23:39.430 "name": "nvme0n1", 00:23:39.430 "enable": true 00:23:39.430 } 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "method": "bdev_wait_for_examine" 00:23:39.430 } 00:23:39.430 ] 00:23:39.430 }, 00:23:39.430 { 00:23:39.430 "subsystem": "nbd", 00:23:39.430 "config": [] 00:23:39.430 } 00:23:39.430 ] 00:23:39.430 }' 00:23:39.430 [2024-07-25 12:36:12.590009] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:23:39.430 [2024-07-25 12:36:12.590061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474995 ] 00:23:39.430 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.430 [2024-07-25 12:36:12.669734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.430 [2024-07-25 12:36:12.748500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.691 [2024-07-25 12:36:12.892710] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.261 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.261 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:40.261 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.261 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:40.521 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.521 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.521 Running I/O for 1 seconds... 00:23:41.462 00:23:41.462 Latency(us) 00:23:41.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.462 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:41.462 Verification LBA range: start 0x0 length 0x2000 00:23:41.462 nvme0n1 : 1.04 3507.43 13.70 0.00 0.00 35953.56 7965.14 37305.11 00:23:41.462 =================================================================================================================== 00:23:41.462 Total : 3507.43 13.70 0.00 0.00 35953.56 7965.14 37305.11 00:23:41.462 0 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:41.462 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:41.462 nvmf_trace.0 00:23:41.723 12:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 474995 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 474995 ']' 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 474995 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 474995 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 474995' 00:23:41.723 killing process with pid 474995 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 474995 00:23:41.723 Received shutdown signal, test time was about 1.000000 seconds 00:23:41.723 00:23:41.723 Latency(us) 00:23:41.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.723 =================================================================================================================== 00:23:41.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.723 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 474995 00:23:41.723 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:41.723 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:41.723 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.984 rmmod nvme_tcp 00:23:41.984 rmmod nvme_fabrics 00:23:41.984 rmmod nvme_keyring 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 474935 ']' 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 474935 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 474935 ']' 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 474935 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.984 12:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 474935 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 474935' 00:23:41.984 killing process with pid 474935 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 474935 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 474935 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.984 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RKLlxSiFGD /tmp/tmp.mcSdMJIagv /tmp/tmp.gFI8vgMpPO 00:23:44.531 00:23:44.531 real 1m33.258s 00:23:44.531 user 2m26.513s 00:23:44.531 sys 0m28.684s 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.531 ************************************ 00:23:44.531 END TEST nvmf_tls 00:23:44.531 ************************************ 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:44.531 ************************************ 00:23:44.531 START TEST nvmf_fips 00:23:44.531 ************************************ 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:44.531 * Looking for test storage... 
00:23:44.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:44.531 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:44.532 Error setting digest 00:23:44.532 004258073E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:44.532 004258073E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:44.532 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.678 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:52.679 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
00:23:52.679 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:52.679 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:52.679 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.679 
12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.679 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.679 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.679 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.679 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.940 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.940 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:23:52.941 00:23:52.941 --- 10.0.0.2 ping statistics --- 00:23:52.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.941 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:23:52.941 00:23:52.941 --- 10.0.0.1 ping statistics --- 00:23:52.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.941 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=479958 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 479958 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 479958 ']' 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.941 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.203 [2024-07-25 12:36:26.367976] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
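The target-side plumbing traced above, nvmf_tcp_init followed by nvmfappstart, can be reproduced by hand along the following lines. The interface names cvl_0_0/cvl_0_1, the 10.0.0.x addressing and the binary arguments are specific to this rig and are copied from the trace; the socket wait is a simplification of what waitforlisten actually does.

  # Sketch of the namespace and target bring-up traced above (rig-specific names kept as-is).
  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                  # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # initiator -> target sanity check
  ip netns exec $NS ping -c 1 10.0.0.1           # target -> initiator sanity check

  # Start the NVMe-oF target inside the namespace on core 1 (mask 0x2) and wait for its RPC socket.
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done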
00:23:53.203 [2024-07-25 12:36:26.368051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.203 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.203 [2024-07-25 12:36:26.457997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.203 [2024-07-25 12:36:26.565732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.203 [2024-07-25 12:36:26.565789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.203 [2024-07-25 12:36:26.565800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.203 [2024-07-25 12:36:26.565809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.203 [2024-07-25 12:36:26.565817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.203 [2024-07-25 12:36:26.565846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.146 [2024-07-25 12:36:27.446212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.146 [2024-07-25 12:36:27.462188] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.146 [2024-07-25 12:36:27.462454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.146 
[2024-07-25 12:36:27.493262] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:54.146 malloc0 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=480132 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 480132 /var/tmp/bdevperf.sock 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 480132 ']' 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.146 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:54.407 [2024-07-25 12:36:27.603458] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:23:54.407 [2024-07-25 12:36:27.603533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480132 ] 00:23:54.407 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.407 [2024-07-25 12:36:27.738664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.668 [2024-07-25 12:36:27.899837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.241 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.241 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:55.241 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:55.503 [2024-07-25 12:36:28.661577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.503 [2024-07-25 12:36:28.661790] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:55.503 TLSTESTn1 00:23:55.503 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.503 Running I/O for 10 seconds... 
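The initiator half of the TLS run traced above comes down to a handful of commands, all copied from the trace with the repository paths shortened; the target listener on 10.0.0.2:4420 was configured beforehand by setup_nvmf_tgt_conf with the same PSK (still the deprecated PSK-path form on this SPDK version, hence the warnings).

  # Initiator-side sketch of the TLS verify run above (paths shortened, key value as in the trace).
  KEY=./test/nvmf/fips/key.txt
  echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > $KEY
  chmod 0600 $KEY

  # bdevperf waits for RPC on its own socket (core mask 0x4), then the controller is attached over TLS.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $KEY

  # Kick off the queued verify workload; this is the "Running I/O for 10 seconds" phase below.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests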
00:24:07.737 00:24:07.737 Latency(us) 00:24:07.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.737 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:07.737 Verification LBA range: start 0x0 length 0x2000 00:24:07.737 TLSTESTn1 : 10.06 1888.36 7.38 0.00 0.00 67532.26 15022.87 58478.28 00:24:07.737 =================================================================================================================== 00:24:07.737 Total : 1888.36 7.38 0.00 0.00 67532.26 15022.87 58478.28 00:24:07.737 0 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:07.737 nvmf_trace.0 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 480132 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 480132 ']' 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 480132 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 480132 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 480132' 00:24:07.737 killing process with pid 480132 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 480132 00:24:07.737 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.737 00:24:07.737 Latency(us) 00:24:07.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.737 =================================================================================================================== 00:24:07.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.737 [2024-07-25 
12:36:39.186357] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 480132 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.737 rmmod nvme_tcp 00:24:07.737 rmmod nvme_fabrics 00:24:07.737 rmmod nvme_keyring 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:07.737 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 479958 ']' 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 479958 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 479958 ']' 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 479958 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 479958 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 479958' 00:24:07.738 killing process with pid 479958 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 479958 00:24:07.738 [2024-07-25 12:36:39.611239] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 479958 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:07.738 12:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:08.679 00:24:08.679 real 0m24.362s 00:24:08.679 user 0m25.598s 00:24:08.679 sys 0m10.134s 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.679 ************************************ 00:24:08.679 END TEST nvmf_fips 00:24:08.679 ************************************ 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.679 12:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.821 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.821 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.821 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.821 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.822 12:36:49 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.822 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.822 ************************************ 00:24:16.822 START TEST nvmf_perf_adq 00:24:16.822 ************************************ 00:24:16.822 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:16.822 * Looking for test storage... 
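gather_supported_nvmf_pci_devs, traced again above for the ADQ test, is essentially a sysfs walk: it collects the PCI IDs it supports (E810, X722 and several Mellanox parts), expands /sys/bus/pci/devices/$pci/net/* for each match, and keeps the interfaces that are up. A reduced sketch limited to the 0x8086:0x159b E810 functions present on this rig follows; it is an approximation of the helper, not its exact code.

  # Reduced sketch of the NIC discovery traced above: net devices under Intel E810
  # (vendor 0x8086, device 0x159b) PCI functions that are administratively up.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          [[ $(cat "$net/operstate") == up ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done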
00:24:16.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.822 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.822 12:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.823 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.970 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.970 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.970 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.970 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.970 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.970 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.970 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.971 12:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:24.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:24.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:24.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
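The repeated "Found net devices under ..." messages above come from a plain sysfs glob: each PCI function is mapped to its kernel netdev by listing /sys/bus/pci/devices/<pci>/net/. A minimal standalone sketch of that mapping (assuming the same two E810 functions; this is not the script's own loop):

for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob expands to .../net/<ifname>
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keeping e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done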
00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:24.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:24:24.971 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:24:26.353 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:24:28.362 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
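The adq_reload_driver step traced just above (perf_adq.sh@53-55) boils down to recycling the ice driver so the E810 ports come back with a clean channel/TC state before ADQ is configured; as a sketch of what the trace shows rather than a verbatim copy of perf_adq.sh:

rmmod ice        # unload the ice driver currently bound to the E810 ports
modprobe ice     # reload it, clearing any previously programmed traffic classes
sleep 5          # give the links time to come back up before the next step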
00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.642 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:33.643 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:33.643 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:33.643 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.643 12:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:33.643 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.643 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
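Condensing the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The same layout as a standalone sketch (interface and namespace names taken from the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule and the two cross-namespace pings that follow in the trace simply confirm that port 4420 is reachable in both directions before the target is started.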
00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:33.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:24:33.643 00:24:33.643 --- 10.0.0.2 ping statistics --- 00:24:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.643 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:24:33.643 00:24:33.643 --- 10.0.0.1 ping statistics --- 00:24:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.643 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.643 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=492045 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 492045 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 492045 ']' 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:33.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.903 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.903 [2024-07-25 12:37:07.120220] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:24:33.903 [2024-07-25 12:37:07.120281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.903 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.903 [2024-07-25 12:37:07.213917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.903 [2024-07-25 12:37:07.307957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.903 [2024-07-25 12:37:07.308016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.903 [2024-07-25 12:37:07.308028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.903 [2024-07-25 12:37:07.308034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.903 [2024-07-25 12:37:07.308040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.903 [2024-07-25 12:37:07.308176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.903 [2024-07-25 12:37:07.308317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.903 [2024-07-25 12:37:07.308627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.903 [2024-07-25 12:37:07.308648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.843 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.843 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:24:34.843 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.843 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.843 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
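The adq_configure_nvmf_target 0 call being expanded here amounts to the following RPC sequence; this is a hedged sketch using scripts/rpc.py against the default /var/tmp/spdk.sock rather than the test's rpc_cmd wrapper, with every flag taken from the trace itself (placement-id mode 0, i.e. disabled, for this first pass):

scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420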
00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 [2024-07-25 12:37:08.203245] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 Malloc1 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:34.843 [2024-07-25 12:37:08.256949] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=492159 00:24:34.843 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:24:35.104 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:35.104 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:24:37.016 "tick_rate": 2600000000, 00:24:37.016 "poll_groups": [ 00:24:37.016 { 00:24:37.016 "name": "nvmf_tgt_poll_group_000", 00:24:37.016 "admin_qpairs": 1, 00:24:37.016 "io_qpairs": 1, 00:24:37.016 "current_admin_qpairs": 1, 00:24:37.016 "current_io_qpairs": 1, 00:24:37.016 "pending_bdev_io": 0, 00:24:37.016 "completed_nvme_io": 15190, 00:24:37.016 "transports": [ 00:24:37.016 { 00:24:37.016 "trtype": "TCP" 00:24:37.016 } 00:24:37.016 ] 00:24:37.016 }, 00:24:37.016 { 00:24:37.016 "name": "nvmf_tgt_poll_group_001", 00:24:37.016 "admin_qpairs": 0, 00:24:37.016 "io_qpairs": 1, 00:24:37.016 "current_admin_qpairs": 0, 00:24:37.016 "current_io_qpairs": 1, 00:24:37.016 "pending_bdev_io": 0, 00:24:37.016 "completed_nvme_io": 7416, 00:24:37.016 "transports": [ 00:24:37.016 { 00:24:37.016 "trtype": "TCP" 00:24:37.016 } 00:24:37.016 ] 00:24:37.016 }, 00:24:37.016 { 00:24:37.016 "name": "nvmf_tgt_poll_group_002", 00:24:37.016 "admin_qpairs": 0, 00:24:37.016 "io_qpairs": 1, 00:24:37.016 "current_admin_qpairs": 0, 00:24:37.016 "current_io_qpairs": 1, 00:24:37.016 "pending_bdev_io": 0, 00:24:37.016 "completed_nvme_io": 7450, 00:24:37.016 "transports": [ 00:24:37.016 { 00:24:37.016 "trtype": "TCP" 00:24:37.016 } 00:24:37.016 ] 00:24:37.016 }, 00:24:37.016 { 00:24:37.016 "name": "nvmf_tgt_poll_group_003", 00:24:37.016 "admin_qpairs": 0, 00:24:37.016 "io_qpairs": 1, 00:24:37.016 "current_admin_qpairs": 0, 00:24:37.016 "current_io_qpairs": 1, 00:24:37.016 "pending_bdev_io": 0, 00:24:37.016 "completed_nvme_io": 15826, 00:24:37.016 "transports": [ 00:24:37.016 { 00:24:37.016 "trtype": "TCP" 00:24:37.016 } 00:24:37.016 ] 00:24:37.016 } 00:24:37.016 ] 00:24:37.016 }' 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:24:37.016 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 492159 00:24:45.142 Initializing NVMe Controllers 00:24:45.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:45.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:45.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:45.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:45.142 Initialization complete. Launching workers. 00:24:45.142 ======================================================== 00:24:45.142 Latency(us) 00:24:45.142 Device Information : IOPS MiB/s Average min max 00:24:45.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9080.50 35.47 7047.42 3124.16 11561.77 00:24:45.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4065.10 15.88 15750.28 5431.64 28815.25 00:24:45.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4065.10 15.88 15744.97 4677.42 30257.69 00:24:45.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8912.60 34.81 7180.22 2092.71 11800.03 00:24:45.142 ======================================================== 00:24:45.142 Total : 26123.29 102.04 9800.44 2092.71 30257.69 00:24:45.142 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.142 rmmod nvme_tcp 00:24:45.142 rmmod nvme_fabrics 00:24:45.142 rmmod nvme_keyring 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 492045 ']' 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 492045 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 492045 ']' 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 492045 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 492045 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:45.142 12:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 492045' 00:24:45.142 killing process with pid 492045 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 492045 00:24:45.142 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 492045 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.402 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.311 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.572 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:24:47.572 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:24:48.954 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:24:50.864 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:56.151 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.151 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:56.152 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:56.152 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:56.152 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:24:56.152 00:24:56.152 --- 10.0.0.2 ping statistics --- 00:24:56.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.152 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:24:56.152 00:24:56.152 --- 10.0.0.1 ping statistics --- 00:24:56.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.152 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:56.152 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:56.413 net.core.busy_poll = 1 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:56.413 net.core.busy_read = 1 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:56.413 
12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.413 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=496134 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 496134 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 496134 ']' 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.673 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:56.673 [2024-07-25 12:37:29.934823] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:24:56.673 [2024-07-25 12:37:29.934958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.673 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.673 [2024-07-25 12:37:30.092463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.933 [2024-07-25 12:37:30.186413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.933 [2024-07-25 12:37:30.186473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.933 [2024-07-25 12:37:30.186481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.933 [2024-07-25 12:37:30.186488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.933 [2024-07-25 12:37:30.186494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
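The adq_configure_driver step above prepares the E810 target port for ADQ: hardware TC offload is enabled, socket busy polling is turned on, and an mqprio qdisc plus a flower filter pin NVMe/TCP traffic (10.0.0.2, TCP port 4420) to a dedicated hardware traffic class. A condensed sketch of that sequence, using the interface and namespace names from this run, is:

  # Offload traffic-class scheduling to the NIC and disable the driver's
  # packet-inspect optimization on the target port.
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # Enable socket busy polling (the values are microsecond budgets).
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes in channel mode: TC0 gets queues 0-1, TC1 gets queues 2-3.
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
      queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP flows destined for 10.0.0.2:4420 into hardware TC 1.
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
      flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # Align transmit/receive queues with the traffic classes (SPDK helper,
  # repo-relative path scripts/perf/nvmf/set_xps_rxqs).
  ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The target is then launched with -m 0xF --wait-for-rpc so the socket-level options can still be set before framework initialization, as the RPC sequence below shows.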
00:24:56.933 [2024-07-25 12:37:30.186640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.933 [2024-07-25 12:37:30.186703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.933 [2024-07-25 12:37:30.186831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.933 [2024-07-25 12:37:30.186831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:24:57.193 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.194 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 [2024-07-25 12:37:30.661437] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 Malloc1 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 [2024-07-25 12:37:30.714815] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=496183 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:24:57.454 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:57.454 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.366 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:24:59.366 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.366 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:59.366 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.366 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:24:59.366 "tick_rate": 2600000000, 00:24:59.366 "poll_groups": [ 00:24:59.366 { 00:24:59.366 "name": "nvmf_tgt_poll_group_000", 00:24:59.366 "admin_qpairs": 1, 00:24:59.366 "io_qpairs": 3, 00:24:59.366 "current_admin_qpairs": 1, 00:24:59.366 
"current_io_qpairs": 3, 00:24:59.366 "pending_bdev_io": 0, 00:24:59.366 "completed_nvme_io": 19688, 00:24:59.366 "transports": [ 00:24:59.366 { 00:24:59.366 "trtype": "TCP" 00:24:59.366 } 00:24:59.366 ] 00:24:59.366 }, 00:24:59.366 { 00:24:59.366 "name": "nvmf_tgt_poll_group_001", 00:24:59.366 "admin_qpairs": 0, 00:24:59.366 "io_qpairs": 1, 00:24:59.366 "current_admin_qpairs": 0, 00:24:59.366 "current_io_qpairs": 1, 00:24:59.366 "pending_bdev_io": 0, 00:24:59.366 "completed_nvme_io": 9665, 00:24:59.366 "transports": [ 00:24:59.366 { 00:24:59.366 "trtype": "TCP" 00:24:59.366 } 00:24:59.366 ] 00:24:59.366 }, 00:24:59.366 { 00:24:59.366 "name": "nvmf_tgt_poll_group_002", 00:24:59.366 "admin_qpairs": 0, 00:24:59.366 "io_qpairs": 0, 00:24:59.366 "current_admin_qpairs": 0, 00:24:59.366 "current_io_qpairs": 0, 00:24:59.366 "pending_bdev_io": 0, 00:24:59.366 "completed_nvme_io": 0, 00:24:59.366 "transports": [ 00:24:59.366 { 00:24:59.366 "trtype": "TCP" 00:24:59.366 } 00:24:59.366 ] 00:24:59.366 }, 00:24:59.366 { 00:24:59.366 "name": "nvmf_tgt_poll_group_003", 00:24:59.366 "admin_qpairs": 0, 00:24:59.366 "io_qpairs": 0, 00:24:59.366 "current_admin_qpairs": 0, 00:24:59.366 "current_io_qpairs": 0, 00:24:59.366 "pending_bdev_io": 0, 00:24:59.366 "completed_nvme_io": 0, 00:24:59.366 "transports": [ 00:24:59.366 { 00:24:59.366 "trtype": "TCP" 00:24:59.366 } 00:24:59.366 ] 00:24:59.366 } 00:24:59.366 ] 00:24:59.366 }' 00:24:59.366 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:59.366 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:24:59.627 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:24:59.627 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:24:59.627 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 496183 00:25:07.894 Initializing NVMe Controllers 00:25:07.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:07.894 Initialization complete. Launching workers. 
00:25:07.894 ======================================================== 00:25:07.894 Latency(us) 00:25:07.894 Device Information : IOPS MiB/s Average min max 00:25:07.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3407.40 13.31 18788.00 2923.66 67867.32 00:25:07.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3956.30 15.45 16180.52 2217.87 65269.13 00:25:07.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5390.20 21.06 11880.42 3566.56 15864.81 00:25:07.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3942.00 15.40 16235.34 2778.87 65019.80 00:25:07.895 ======================================================== 00:25:07.895 Total : 16695.89 65.22 15337.35 2217.87 67867.32 00:25:07.895 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.895 rmmod nvme_tcp 00:25:07.895 rmmod nvme_fabrics 00:25:07.895 rmmod nvme_keyring 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 496134 ']' 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 496134 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 496134 ']' 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 496134 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.895 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 496134 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 496134' 00:25:07.895 killing process with pid 496134 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 496134 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 496134 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:07.895 12:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.895 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:25:11.193 00:25:11.193 real 0m54.268s 00:25:11.193 user 2m49.352s 00:25:11.193 sys 0m11.362s 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:11.193 ************************************ 00:25:11.193 END TEST nvmf_perf_adq 00:25:11.193 ************************************ 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:11.193 ************************************ 00:25:11.193 START TEST nvmf_shutdown 00:25:11.193 ************************************ 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:11.193 * Looking for test storage... 
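nvmftestfini then reverses the perf_adq setup: the kernel initiator modules are unloaded, the nvmf_tgt process (pid 496134 in this run) is killed, the namespace created for the target port is removed, and the leftover address on the initiator port is flushed. Roughly, with the namespace removal inferred (remove_spdk_ns runs with xtrace disabled, so its body does not appear in the log):

  modprobe -v -r nvme-tcp     # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"             # 496134 here
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of remove_spdk_ns; not shown verbatim
  ip -4 addr flush cvl_0_1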
00:25:11.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.193 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:11.193 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:11.193 ************************************ 00:25:11.193 START TEST nvmf_shutdown_tc1 00:25:11.193 ************************************ 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.193 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:19.328 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:19.328 12:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:19.328 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:19.328 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.328 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:19.329 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.329 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.589 12:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.589 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.589 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:19.589 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.589 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.589 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.589 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:19.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:25:19.589 00:25:19.589 --- 10.0.0.2 ping statistics --- 00:25:19.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.589 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:25:19.589 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:25:19.849 00:25:19.849 --- 10.0.0.1 ping statistics --- 00:25:19.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.849 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:25:19.849 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.849 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:25:19.849 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.849 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.849 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:19.849 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:19.849 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=502609 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 502609 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 502609 ']' 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.850 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:19.850 [2024-07-25 12:37:53.167000] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:25:19.850 [2024-07-25 12:37:53.167133] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.850 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.110 [2024-07-25 12:37:53.320139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.110 [2024-07-25 12:37:53.429735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.110 [2024-07-25 12:37:53.429797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.110 [2024-07-25 12:37:53.429808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.110 [2024-07-25 12:37:53.429817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.110 [2024-07-25 12:37:53.429825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
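For the shutdown tests the two-port topology is rebuilt from scratch: one E810 port (cvl_0_0) is moved into a private namespace and carries the target address, while the sibling port (cvl_0_1) stays in the default namespace as the initiator side, and an iptables rule opens the NVMe/TCP port. The nvmf_tcp_init sequence logged above boils down to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps 10.0.0.1; the target gets 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Both directions are pinged once before nvmf_tgt is started (-m 0x1E, cores 1-4).
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1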
00:25:20.110 [2024-07-25 12:37:53.429993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.110 [2024-07-25 12:37:53.430152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.110 [2024-07-25 12:37:53.430307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:20.110 [2024-07-25 12:37:53.430309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:20.682 [2024-07-25 12:37:54.052597] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:25:20.682 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.943 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:20.943 Malloc1 00:25:20.943 [2024-07-25 12:37:54.175628] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.943 Malloc2 00:25:20.943 Malloc3 00:25:20.943 Malloc4 00:25:20.943 Malloc5 00:25:21.204 Malloc6 00:25:21.204 Malloc7 00:25:21.204 Malloc8 00:25:21.204 Malloc9 00:25:21.204 Malloc10 00:25:21.204 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.204 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:21.204 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.204 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=502958 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 502958 /var/tmp/bdevperf.sock 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 502958 ']' 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.466 12:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.466 { 00:25:21.466 "params": { 00:25:21.466 "name": "Nvme$subsystem", 00:25:21.466 "trtype": "$TEST_TRANSPORT", 00:25:21.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.466 "adrfam": "ipv4", 00:25:21.466 "trsvcid": "$NVMF_PORT", 00:25:21.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.466 "hdgst": ${hdgst:-false}, 00:25:21.466 "ddgst": ${ddgst:-false} 00:25:21.466 }, 00:25:21.466 "method": "bdev_nvme_attach_controller" 00:25:21.466 } 00:25:21.466 EOF 00:25:21.466 )") 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.466 { 00:25:21.466 "params": { 00:25:21.466 "name": "Nvme$subsystem", 00:25:21.466 "trtype": "$TEST_TRANSPORT", 00:25:21.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.466 "adrfam": "ipv4", 00:25:21.466 "trsvcid": "$NVMF_PORT", 00:25:21.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.466 "hdgst": ${hdgst:-false}, 00:25:21.466 "ddgst": ${ddgst:-false} 00:25:21.466 }, 00:25:21.466 "method": "bdev_nvme_attach_controller" 00:25:21.466 } 00:25:21.466 EOF 00:25:21.466 )") 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.466 { 00:25:21.466 "params": { 00:25:21.466 "name": 
"Nvme$subsystem", 00:25:21.466 "trtype": "$TEST_TRANSPORT", 00:25:21.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.466 "adrfam": "ipv4", 00:25:21.466 "trsvcid": "$NVMF_PORT", 00:25:21.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.466 "hdgst": ${hdgst:-false}, 00:25:21.466 "ddgst": ${ddgst:-false} 00:25:21.466 }, 00:25:21.466 "method": "bdev_nvme_attach_controller" 00:25:21.466 } 00:25:21.466 EOF 00:25:21.466 )") 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.466 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.466 { 00:25:21.466 "params": { 00:25:21.466 "name": "Nvme$subsystem", 00:25:21.466 "trtype": "$TEST_TRANSPORT", 00:25:21.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.466 "adrfam": "ipv4", 00:25:21.466 "trsvcid": "$NVMF_PORT", 00:25:21.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.466 "hdgst": ${hdgst:-false}, 00:25:21.466 "ddgst": ${ddgst:-false} 00:25:21.466 }, 00:25:21.466 "method": "bdev_nvme_attach_controller" 00:25:21.466 } 00:25:21.467 EOF 00:25:21.467 )") 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.467 { 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme$subsystem", 00:25:21.467 "trtype": "$TEST_TRANSPORT", 00:25:21.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "$NVMF_PORT", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.467 "hdgst": ${hdgst:-false}, 00:25:21.467 "ddgst": ${ddgst:-false} 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 } 00:25:21.467 EOF 00:25:21.467 )") 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.467 { 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme$subsystem", 00:25:21.467 "trtype": "$TEST_TRANSPORT", 00:25:21.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "$NVMF_PORT", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.467 "hdgst": ${hdgst:-false}, 00:25:21.467 "ddgst": ${ddgst:-false} 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 } 00:25:21.467 EOF 00:25:21.467 )") 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.467 [2024-07-25 12:37:54.696704] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:25:21.467 [2024-07-25 12:37:54.696773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.467 { 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme$subsystem", 00:25:21.467 "trtype": "$TEST_TRANSPORT", 00:25:21.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "$NVMF_PORT", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.467 "hdgst": ${hdgst:-false}, 00:25:21.467 "ddgst": ${ddgst:-false} 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 } 00:25:21.467 EOF 00:25:21.467 )") 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.467 { 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme$subsystem", 00:25:21.467 "trtype": "$TEST_TRANSPORT", 00:25:21.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "$NVMF_PORT", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.467 "hdgst": ${hdgst:-false}, 00:25:21.467 "ddgst": ${ddgst:-false} 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 } 00:25:21.467 EOF 00:25:21.467 )") 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.467 { 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme$subsystem", 00:25:21.467 "trtype": "$TEST_TRANSPORT", 00:25:21.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "$NVMF_PORT", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.467 "hdgst": ${hdgst:-false}, 00:25:21.467 "ddgst": ${ddgst:-false} 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 } 00:25:21.467 EOF 00:25:21.467 )") 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.467 { 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme$subsystem", 00:25:21.467 "trtype": "$TEST_TRANSPORT", 00:25:21.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.467 "adrfam": "ipv4", 
00:25:21.467 "trsvcid": "$NVMF_PORT", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.467 "hdgst": ${hdgst:-false}, 00:25:21.467 "ddgst": ${ddgst:-false} 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 } 00:25:21.467 EOF 00:25:21.467 )") 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:21.467 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:25:21.467 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme1", 00:25:21.467 "trtype": "tcp", 00:25:21.467 "traddr": "10.0.0.2", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "4420", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:21.467 "hdgst": false, 00:25:21.467 "ddgst": false 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 },{ 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme2", 00:25:21.467 "trtype": "tcp", 00:25:21.467 "traddr": "10.0.0.2", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "4420", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:21.467 "hdgst": false, 00:25:21.467 "ddgst": false 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 },{ 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme3", 00:25:21.467 "trtype": "tcp", 00:25:21.467 "traddr": "10.0.0.2", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "4420", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:21.467 "hdgst": false, 00:25:21.467 "ddgst": false 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 },{ 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme4", 00:25:21.467 "trtype": "tcp", 00:25:21.467 "traddr": "10.0.0.2", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "4420", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:21.467 "hdgst": false, 00:25:21.467 "ddgst": false 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.467 },{ 00:25:21.467 "params": { 00:25:21.467 "name": "Nvme5", 00:25:21.467 "trtype": "tcp", 00:25:21.467 "traddr": "10.0.0.2", 00:25:21.467 "adrfam": "ipv4", 00:25:21.467 "trsvcid": "4420", 00:25:21.467 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:21.467 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:21.467 "hdgst": false, 00:25:21.467 "ddgst": false 00:25:21.467 }, 00:25:21.467 "method": "bdev_nvme_attach_controller" 00:25:21.468 },{ 00:25:21.468 "params": { 00:25:21.468 "name": "Nvme6", 00:25:21.468 "trtype": "tcp", 00:25:21.468 "traddr": "10.0.0.2", 00:25:21.468 "adrfam": "ipv4", 00:25:21.468 "trsvcid": "4420", 00:25:21.468 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:21.468 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:21.468 "hdgst": false, 00:25:21.468 "ddgst": false 00:25:21.468 }, 00:25:21.468 "method": "bdev_nvme_attach_controller" 00:25:21.468 },{ 00:25:21.468 "params": { 00:25:21.468 "name": "Nvme7", 00:25:21.468 "trtype": 
"tcp", 00:25:21.468 "traddr": "10.0.0.2", 00:25:21.468 "adrfam": "ipv4", 00:25:21.468 "trsvcid": "4420", 00:25:21.468 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:21.468 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:21.468 "hdgst": false, 00:25:21.468 "ddgst": false 00:25:21.468 }, 00:25:21.468 "method": "bdev_nvme_attach_controller" 00:25:21.468 },{ 00:25:21.468 "params": { 00:25:21.468 "name": "Nvme8", 00:25:21.468 "trtype": "tcp", 00:25:21.468 "traddr": "10.0.0.2", 00:25:21.468 "adrfam": "ipv4", 00:25:21.468 "trsvcid": "4420", 00:25:21.468 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:21.468 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:21.468 "hdgst": false, 00:25:21.468 "ddgst": false 00:25:21.468 }, 00:25:21.468 "method": "bdev_nvme_attach_controller" 00:25:21.468 },{ 00:25:21.468 "params": { 00:25:21.468 "name": "Nvme9", 00:25:21.468 "trtype": "tcp", 00:25:21.468 "traddr": "10.0.0.2", 00:25:21.468 "adrfam": "ipv4", 00:25:21.468 "trsvcid": "4420", 00:25:21.468 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:21.468 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:21.468 "hdgst": false, 00:25:21.468 "ddgst": false 00:25:21.468 }, 00:25:21.468 "method": "bdev_nvme_attach_controller" 00:25:21.468 },{ 00:25:21.468 "params": { 00:25:21.468 "name": "Nvme10", 00:25:21.468 "trtype": "tcp", 00:25:21.468 "traddr": "10.0.0.2", 00:25:21.468 "adrfam": "ipv4", 00:25:21.468 "trsvcid": "4420", 00:25:21.468 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:21.468 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:21.468 "hdgst": false, 00:25:21.468 "ddgst": false 00:25:21.468 }, 00:25:21.468 "method": "bdev_nvme_attach_controller" 00:25:21.468 }' 00:25:21.468 [2024-07-25 12:37:54.782085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.468 [2024-07-25 12:37:54.875795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 502958 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:22.854 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:25:23.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 502958 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 502609 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.796 { 00:25:23.796 "params": { 00:25:23.796 "name": "Nvme$subsystem", 00:25:23.796 "trtype": "$TEST_TRANSPORT", 00:25:23.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.796 "adrfam": "ipv4", 00:25:23.796 "trsvcid": "$NVMF_PORT", 00:25:23.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.796 "hdgst": ${hdgst:-false}, 00:25:23.796 "ddgst": ${ddgst:-false} 00:25:23.796 }, 00:25:23.796 "method": "bdev_nvme_attach_controller" 00:25:23.796 } 00:25:23.796 EOF 00:25:23.796 )") 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.796 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.796 { 00:25:23.796 "params": { 00:25:23.796 "name": "Nvme$subsystem", 00:25:23.796 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.797 { 00:25:23.797 "params": { 00:25:23.797 "name": "Nvme$subsystem", 00:25:23.797 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.797 { 00:25:23.797 "params": { 00:25:23.797 "name": "Nvme$subsystem", 00:25:23.797 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.797 { 00:25:23.797 "params": { 00:25:23.797 "name": "Nvme$subsystem", 00:25:23.797 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.797 { 00:25:23.797 "params": { 00:25:23.797 "name": "Nvme$subsystem", 00:25:23.797 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.797 { 00:25:23.797 "params": { 00:25:23.797 "name": "Nvme$subsystem", 00:25:23.797 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 [2024-07-25 12:37:57.195381] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:25:23.797 [2024-07-25 12:37:57.195450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503296 ] 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.797 { 00:25:23.797 "params": { 00:25:23.797 "name": "Nvme$subsystem", 00:25:23.797 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:23.797 { 00:25:23.797 "params": { 00:25:23.797 "name": "Nvme$subsystem", 00:25:23.797 "trtype": "$TEST_TRANSPORT", 00:25:23.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.797 "adrfam": "ipv4", 00:25:23.797 "trsvcid": "$NVMF_PORT", 00:25:23.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.797 "hdgst": ${hdgst:-false}, 00:25:23.797 "ddgst": ${ddgst:-false} 00:25:23.797 }, 00:25:23.797 "method": "bdev_nvme_attach_controller" 00:25:23.797 } 00:25:23.797 EOF 00:25:23.797 )") 00:25:23.797 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:24.059 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:24.059 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:24.059 { 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme$subsystem", 00:25:24.059 "trtype": "$TEST_TRANSPORT", 00:25:24.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "$NVMF_PORT", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.059 "hdgst": ${hdgst:-false}, 00:25:24.059 "ddgst": ${ddgst:-false} 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 } 00:25:24.059 EOF 00:25:24.059 )") 00:25:24.059 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:24.059 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:25:24.059 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.059 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:25:24.059 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme1", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme2", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme3", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme4", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme5", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme6", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme7", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme8", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme9", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 },{ 00:25:24.059 "params": { 00:25:24.059 "name": "Nvme10", 00:25:24.059 "trtype": "tcp", 00:25:24.059 "traddr": "10.0.0.2", 00:25:24.059 "adrfam": "ipv4", 00:25:24.059 "trsvcid": "4420", 00:25:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:24.059 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:24.059 "hdgst": false, 00:25:24.059 "ddgst": false 00:25:24.059 }, 00:25:24.059 "method": "bdev_nvme_attach_controller" 00:25:24.059 }' 00:25:24.059 [2024-07-25 12:37:57.283625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.059 [2024-07-25 12:37:57.377849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.444 Running I/O for 1 seconds... 00:25:26.828 00:25:26.828 Latency(us) 00:25:26.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.828 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.828 Verification LBA range: start 0x0 length 0x400 00:25:26.828 Nvme1n1 : 1.16 220.24 13.77 0.00 0.00 287454.52 28835.84 267790.18 00:25:26.829 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme2n1 : 1.17 218.00 13.63 0.00 0.00 286059.91 30650.68 250045.05 00:25:26.829 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme3n1 : 1.14 225.52 14.10 0.00 0.00 272203.82 25105.33 266176.98 00:25:26.829 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme4n1 : 1.14 280.08 17.51 0.00 0.00 215504.11 23895.43 219394.36 00:25:26.829 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme5n1 : 1.19 214.59 13.41 0.00 0.00 277640.66 13510.50 327478.35 00:25:26.829 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme6n1 : 1.17 218.31 13.64 0.00 0.00 267505.82 28432.54 282308.92 00:25:26.829 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme7n1 : 1.18 276.08 17.26 0.00 0.00 207524.79 6276.33 232299.91 00:25:26.829 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme8n1 : 1.15 277.90 17.37 0.00 0.00 203029.27 20467.40 243592.27 00:25:26.829 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme9n1 : 1.21 211.00 13.19 0.00 0.00 264377.21 9124.63 280695.73 00:25:26.829 Job: Nvme10n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:25:26.829 Verification LBA range: start 0x0 length 0x400 00:25:26.829 Nvme10n1 : 1.20 218.11 13.63 0.00 0.00 249279.74 3806.13 303280.44 00:25:26.829 =================================================================================================================== 00:25:26.829 Total : 2359.85 147.49 0.00 0.00 249884.21 3806.13 327478.35 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:26.829 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:26.829 rmmod nvme_tcp 00:25:26.829 rmmod nvme_fabrics 00:25:27.089 rmmod nvme_keyring 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 502609 ']' 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 502609 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 502609 ']' 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 502609 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 502609 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 502609' 00:25:27.089 killing process with pid 502609 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 502609 00:25:27.089 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 502609 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.350 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.895 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:29.895 00:25:29.895 real 0m18.254s 00:25:29.895 user 0m35.144s 00:25:29.895 sys 0m7.967s 00:25:29.895 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:29.896 ************************************ 00:25:29.896 END TEST nvmf_shutdown_tc1 00:25:29.896 ************************************ 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:29.896 ************************************ 00:25:29.896 START TEST nvmf_shutdown_tc2 00:25:29.896 ************************************ 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:29.896 12:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:29.896 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:29.896 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:29.896 12:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:29.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:29.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:29.896 12:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:29.896 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.897 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.897 12:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:29.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:25:29.897 00:25:29.897 --- 10.0.0.2 ping statistics --- 00:25:29.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.897 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:25:29.897 00:25:29.897 --- 10.0.0.1 ping statistics --- 00:25:29.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.897 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=504322 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 504322 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 504322 ']' 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.897 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:29.897 [2024-07-25 12:38:03.313117] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:25:29.897 [2024-07-25 12:38:03.313180] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.157 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.157 [2024-07-25 12:38:03.402377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.157 [2024-07-25 12:38:03.510573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.157 [2024-07-25 12:38:03.510636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.157 [2024-07-25 12:38:03.510647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.157 [2024-07-25 12:38:03.510657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.157 [2024-07-25 12:38:03.510665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
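The tc2 prologue above reduces to a start-and-wait pattern: launch nvmf_tgt inside the test namespace with core mask 0x1E (cores 1-4, matching the four "Reactor started" notices that follow), then poll until the application answers on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming scripts/rpc.py is used as the liveness probe; the real waitforlisten helper in autotest_common.sh may differ in detail:

# Hedged sketch, not the autotest helper itself: binary path, core mask and socket
# path are copied from the trace above; the polling loop is illustrative only.
NS=(ip netns exec cvl_0_0_ns_spdk)
"${NS[@]}" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods keeps failing until the target listens on the UNIX socket
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || exit 1    # give up if the target died during start-up
    sleep 0.1
done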
00:25:30.157 [2024-07-25 12:38:03.510826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.157 [2024-07-25 12:38:03.510977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.157 [2024-07-25 12:38:03.511044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.157 [2024-07-25 12:38:03.511044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:30.728 [2024-07-25 12:38:03.970011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.728 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:30.728 Malloc1 00:25:30.728 [2024-07-25 12:38:04.093033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.728 Malloc2 00:25:30.989 Malloc3 00:25:30.989 Malloc4 00:25:30.989 Malloc5 00:25:30.989 Malloc6 00:25:30.989 Malloc7 00:25:30.989 Malloc8 00:25:31.251 Malloc9 00:25:31.251 Malloc10 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=504665 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 504665 /var/tmp/bdevperf.sock 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 504665 ']' 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.251 12:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.251 { 00:25:31.251 "params": { 00:25:31.251 "name": "Nvme$subsystem", 00:25:31.251 "trtype": "$TEST_TRANSPORT", 00:25:31.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.251 "adrfam": "ipv4", 00:25:31.251 "trsvcid": "$NVMF_PORT", 00:25:31.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.251 "hdgst": ${hdgst:-false}, 00:25:31.251 "ddgst": ${ddgst:-false} 00:25:31.251 }, 00:25:31.251 "method": "bdev_nvme_attach_controller" 00:25:31.251 } 00:25:31.251 EOF 00:25:31.251 )") 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.251 { 00:25:31.251 "params": { 00:25:31.251 "name": "Nvme$subsystem", 00:25:31.251 "trtype": "$TEST_TRANSPORT", 00:25:31.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.251 "adrfam": "ipv4", 00:25:31.251 "trsvcid": "$NVMF_PORT", 00:25:31.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.251 "hdgst": ${hdgst:-false}, 00:25:31.251 "ddgst": ${ddgst:-false} 00:25:31.251 }, 00:25:31.251 "method": "bdev_nvme_attach_controller" 00:25:31.251 } 00:25:31.251 EOF 00:25:31.251 )") 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.251 { 00:25:31.251 "params": { 00:25:31.251 
"name": "Nvme$subsystem", 00:25:31.251 "trtype": "$TEST_TRANSPORT", 00:25:31.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.251 "adrfam": "ipv4", 00:25:31.251 "trsvcid": "$NVMF_PORT", 00:25:31.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.251 "hdgst": ${hdgst:-false}, 00:25:31.251 "ddgst": ${ddgst:-false} 00:25:31.251 }, 00:25:31.251 "method": "bdev_nvme_attach_controller" 00:25:31.251 } 00:25:31.251 EOF 00:25:31.251 )") 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.251 { 00:25:31.251 "params": { 00:25:31.251 "name": "Nvme$subsystem", 00:25:31.251 "trtype": "$TEST_TRANSPORT", 00:25:31.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.251 "adrfam": "ipv4", 00:25:31.251 "trsvcid": "$NVMF_PORT", 00:25:31.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.251 "hdgst": ${hdgst:-false}, 00:25:31.251 "ddgst": ${ddgst:-false} 00:25:31.251 }, 00:25:31.251 "method": "bdev_nvme_attach_controller" 00:25:31.251 } 00:25:31.251 EOF 00:25:31.251 )") 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.251 { 00:25:31.251 "params": { 00:25:31.251 "name": "Nvme$subsystem", 00:25:31.251 "trtype": "$TEST_TRANSPORT", 00:25:31.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.251 "adrfam": "ipv4", 00:25:31.251 "trsvcid": "$NVMF_PORT", 00:25:31.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.251 "hdgst": ${hdgst:-false}, 00:25:31.251 "ddgst": ${ddgst:-false} 00:25:31.251 }, 00:25:31.251 "method": "bdev_nvme_attach_controller" 00:25:31.251 } 00:25:31.251 EOF 00:25:31.251 )") 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.251 { 00:25:31.251 "params": { 00:25:31.251 "name": "Nvme$subsystem", 00:25:31.251 "trtype": "$TEST_TRANSPORT", 00:25:31.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.251 "adrfam": "ipv4", 00:25:31.251 "trsvcid": "$NVMF_PORT", 00:25:31.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.251 "hdgst": ${hdgst:-false}, 00:25:31.251 "ddgst": ${ddgst:-false} 00:25:31.251 }, 00:25:31.251 "method": "bdev_nvme_attach_controller" 00:25:31.251 } 00:25:31.251 EOF 00:25:31.251 )") 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.251 [2024-07-25 12:38:04.608310] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:25:31.251 [2024-07-25 12:38:04.608376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504665 ] 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.251 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.251 { 00:25:31.251 "params": { 00:25:31.251 "name": "Nvme$subsystem", 00:25:31.251 "trtype": "$TEST_TRANSPORT", 00:25:31.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "$NVMF_PORT", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.252 "hdgst": ${hdgst:-false}, 00:25:31.252 "ddgst": ${ddgst:-false} 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 } 00:25:31.252 EOF 00:25:31.252 )") 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.252 { 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme$subsystem", 00:25:31.252 "trtype": "$TEST_TRANSPORT", 00:25:31.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "$NVMF_PORT", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.252 "hdgst": ${hdgst:-false}, 00:25:31.252 "ddgst": ${ddgst:-false} 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 } 00:25:31.252 EOF 00:25:31.252 )") 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.252 { 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme$subsystem", 00:25:31.252 "trtype": "$TEST_TRANSPORT", 00:25:31.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "$NVMF_PORT", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.252 "hdgst": ${hdgst:-false}, 00:25:31.252 "ddgst": ${ddgst:-false} 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 } 00:25:31.252 EOF 00:25:31.252 )") 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.252 { 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme$subsystem", 00:25:31.252 "trtype": "$TEST_TRANSPORT", 00:25:31.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.252 
"adrfam": "ipv4", 00:25:31.252 "trsvcid": "$NVMF_PORT", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.252 "hdgst": ${hdgst:-false}, 00:25:31.252 "ddgst": ${ddgst:-false} 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 } 00:25:31.252 EOF 00:25:31.252 )") 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:31.252 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:25:31.252 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme1", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme2", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme3", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme4", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme5", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme6", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme7", 
00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme8", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme9", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 },{ 00:25:31.252 "params": { 00:25:31.252 "name": "Nvme10", 00:25:31.252 "trtype": "tcp", 00:25:31.252 "traddr": "10.0.0.2", 00:25:31.252 "adrfam": "ipv4", 00:25:31.252 "trsvcid": "4420", 00:25:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:31.252 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:31.252 "hdgst": false, 00:25:31.252 "ddgst": false 00:25:31.252 }, 00:25:31.252 "method": "bdev_nvme_attach_controller" 00:25:31.252 }' 00:25:31.514 [2024-07-25 12:38:04.697671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.514 [2024-07-25 12:38:04.792166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.899 Running I/O for 10 seconds... 
00:25:32.899 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.899 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:32.899 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:32.899 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.899 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:32.899 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:33.159 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:33.159 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:33.159 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:33.159 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:33.159 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.160 12:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.160 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.160 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:33.160 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:33.160 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 504665 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 504665 ']' 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 504665 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 504665 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:33.420 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 504665' 00:25:33.420 killing process with pid 504665 00:25:33.420 12:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 504665
00:25:33.421 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 504665
00:25:33.682 Received shutdown signal, test time was about 0.993567 seconds
00:25:33.682
00:25:33.682 Latency(us)
00:25:33.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:33.682 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme1n1 : 0.96 200.24 12.51 0.00 0.00 314874.75 37506.76 314572.80
00:25:33.682 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme2n1 : 0.94 203.48 12.72 0.00 0.00 304675.71 29642.44 272629.76
00:25:33.682 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme3n1 : 0.97 264.22 16.51 0.00 0.00 229323.82 15123.69 235526.30
00:25:33.682 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme4n1 : 0.93 292.56 18.28 0.00 0.00 201235.92 6553.60 216167.98
00:25:33.682 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme5n1 : 0.97 197.88 12.37 0.00 0.00 294846.36 27021.00 298440.86
00:25:33.682 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme6n1 : 0.94 212.24 13.27 0.00 0.00 267449.52 3604.48 240365.88
00:25:33.682 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme7n1 : 0.99 257.87 16.12 0.00 0.00 215667.59 13611.32 256497.82
00:25:33.682 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme8n1 : 0.95 270.25 16.89 0.00 0.00 202106.09 22080.59 224233.94
00:25:33.682 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme9n1 : 0.99 193.58 12.10 0.00 0.00 278223.43 13308.85 340383.90
00:25:33.682 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:33.682 Verification LBA range: start 0x0 length 0x400
00:25:33.682 Nvme10n1 : 0.95 201.85 12.62 0.00 0.00 258805.23 32868.82 251658.24
00:25:33.682 ===================================================================================================================
00:25:33.682 Total : 2294.17 143.39 0.00 0.00 251154.74 3604.48 340383.90
00:25:33.682 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 504322
00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:25:35.066 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.067 rmmod nvme_tcp 00:25:35.067 rmmod nvme_fabrics 00:25:35.067 rmmod nvme_keyring 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 504322 ']' 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 504322 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 504322 ']' 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 504322 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 504322 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 504322' 00:25:35.067 killing process with pid 504322 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 504322 00:25:35.067 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 504322 00:25:35.327 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.327 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.327 
12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.327 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.327 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.327 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.327 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.327 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.239 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:37.239 00:25:37.239 real 0m7.783s 00:25:37.239 user 0m22.672s 00:25:37.239 sys 0m1.691s 00:25:37.239 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.239 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.239 ************************************ 00:25:37.239 END TEST nvmf_shutdown_tc2 00:25:37.239 ************************************ 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:37.500 ************************************ 00:25:37.500 START TEST nvmf_shutdown_tc3 00:25:37.500 ************************************ 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.500 12:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.500 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:37.501 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:37.501 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.501 12:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:37.501 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:37.501 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.501 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.763 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.763 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:25:37.763 00:25:37.763 --- 10.0.0.2 ping statistics --- 00:25:37.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.763 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:25:37.763 00:25:37.763 --- 10.0.0.1 ping statistics --- 00:25:37.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.763 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=505727 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 505727 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 505727 ']' 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
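Before this second target instance can start listening, nvmf_tcp_init has rebuilt the same two-port topology used for tc2: the target-side interface moves into a network namespace, each side gets one address of the 10.0.0.0/24 pair, TCP port 4420 is opened, and reachability is ping-checked in both directions. Condensed from the commands traced above (device names and addresses are copied from the log; the real helper in nvmf/common.sh carries additional bookkeeping):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # target reachable from the initiator
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and the reverse direction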
00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:37.763 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:37.763 [2024-07-25 12:38:11.162080] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:25:37.763 [2024-07-25 12:38:11.162143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.024 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.024 [2024-07-25 12:38:11.256225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.024 [2024-07-25 12:38:11.366651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.024 [2024-07-25 12:38:11.366716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.024 [2024-07-25 12:38:11.366727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.024 [2024-07-25 12:38:11.366737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.024 [2024-07-25 12:38:11.366746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.024 [2024-07-25 12:38:11.366912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.024 [2024-07-25 12:38:11.367070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.024 [2024-07-25 12:38:11.367223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:38.024 [2024-07-25 12:38:11.367223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:39.010 [2024-07-25 12:38:12.087072] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.010 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.010 Malloc1 00:25:39.010 [2024-07-25 12:38:12.209940] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.010 Malloc2 00:25:39.010 Malloc3 00:25:39.010 Malloc4 00:25:39.010 Malloc5 00:25:39.297 Malloc6 00:25:39.297 Malloc7 00:25:39.297 Malloc8 00:25:39.297 Malloc9 00:25:39.297 Malloc10 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=506063 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 506063 /var/tmp/bdevperf.sock 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 506063 ']' 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
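Each of the ten cat calls above appends one subsystem's worth of RPC commands to rpcs.txt, and the bare rpc_cmd at shutdown.sh@35 then replays the whole batch against /var/tmp/spdk.sock; that is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener appear in the output. A rough single-call equivalent using scripts/rpc.py is sketched below; the exact template inside shutdown.sh is not shown in the trace, and the malloc size and block size here are assumed:

# Sketch only: per-subsystem setup issued one RPC at a time.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in {1..10}; do
    "$RPC" bdev_malloc_create -b "Malloc$i" 64 512                   # 64 MiB / 512 B block: assumed
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a    # allow any host
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done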
00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.297 { 00:25:39.297 "params": { 00:25:39.297 "name": "Nvme$subsystem", 00:25:39.297 "trtype": "$TEST_TRANSPORT", 00:25:39.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.297 "adrfam": "ipv4", 00:25:39.297 "trsvcid": "$NVMF_PORT", 00:25:39.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.297 "hdgst": ${hdgst:-false}, 00:25:39.297 "ddgst": ${ddgst:-false} 00:25:39.297 }, 00:25:39.297 "method": "bdev_nvme_attach_controller" 00:25:39.297 } 00:25:39.297 EOF 00:25:39.297 )") 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.297 { 00:25:39.297 "params": { 00:25:39.297 "name": "Nvme$subsystem", 00:25:39.297 "trtype": "$TEST_TRANSPORT", 00:25:39.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.297 "adrfam": "ipv4", 00:25:39.297 "trsvcid": "$NVMF_PORT", 00:25:39.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.297 "hdgst": ${hdgst:-false}, 00:25:39.297 "ddgst": ${ddgst:-false} 00:25:39.297 }, 00:25:39.297 "method": "bdev_nvme_attach_controller" 00:25:39.297 } 00:25:39.297 EOF 00:25:39.297 )") 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.297 { 00:25:39.297 "params": { 00:25:39.297 "name": "Nvme$subsystem", 00:25:39.297 "trtype": "$TEST_TRANSPORT", 00:25:39.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.297 "adrfam": "ipv4", 00:25:39.297 "trsvcid": "$NVMF_PORT", 00:25:39.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.297 "hdgst": ${hdgst:-false}, 00:25:39.297 "ddgst": ${ddgst:-false} 00:25:39.297 }, 00:25:39.297 "method": 
"bdev_nvme_attach_controller" 00:25:39.297 } 00:25:39.297 EOF 00:25:39.297 )") 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.297 { 00:25:39.297 "params": { 00:25:39.297 "name": "Nvme$subsystem", 00:25:39.297 "trtype": "$TEST_TRANSPORT", 00:25:39.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.297 "adrfam": "ipv4", 00:25:39.297 "trsvcid": "$NVMF_PORT", 00:25:39.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.297 "hdgst": ${hdgst:-false}, 00:25:39.297 "ddgst": ${ddgst:-false} 00:25:39.297 }, 00:25:39.297 "method": "bdev_nvme_attach_controller" 00:25:39.297 } 00:25:39.297 EOF 00:25:39.297 )") 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.297 { 00:25:39.297 "params": { 00:25:39.297 "name": "Nvme$subsystem", 00:25:39.297 "trtype": "$TEST_TRANSPORT", 00:25:39.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.297 "adrfam": "ipv4", 00:25:39.297 "trsvcid": "$NVMF_PORT", 00:25:39.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.297 "hdgst": ${hdgst:-false}, 00:25:39.297 "ddgst": ${ddgst:-false} 00:25:39.297 }, 00:25:39.297 "method": "bdev_nvme_attach_controller" 00:25:39.297 } 00:25:39.297 EOF 00:25:39.297 )") 00:25:39.297 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.559 { 00:25:39.559 "params": { 00:25:39.559 "name": "Nvme$subsystem", 00:25:39.559 "trtype": "$TEST_TRANSPORT", 00:25:39.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.559 "adrfam": "ipv4", 00:25:39.559 "trsvcid": "$NVMF_PORT", 00:25:39.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.559 "hdgst": ${hdgst:-false}, 00:25:39.559 "ddgst": ${ddgst:-false} 00:25:39.559 }, 00:25:39.559 "method": "bdev_nvme_attach_controller" 00:25:39.559 } 00:25:39.559 EOF 00:25:39.559 )") 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.559 [2024-07-25 12:38:12.724078] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:25:39.559 [2024-07-25 12:38:12.724149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506063 ] 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.559 { 00:25:39.559 "params": { 00:25:39.559 "name": "Nvme$subsystem", 00:25:39.559 "trtype": "$TEST_TRANSPORT", 00:25:39.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.559 "adrfam": "ipv4", 00:25:39.559 "trsvcid": "$NVMF_PORT", 00:25:39.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.559 "hdgst": ${hdgst:-false}, 00:25:39.559 "ddgst": ${ddgst:-false} 00:25:39.559 }, 00:25:39.559 "method": "bdev_nvme_attach_controller" 00:25:39.559 } 00:25:39.559 EOF 00:25:39.559 )") 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.559 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.559 { 00:25:39.559 "params": { 00:25:39.559 "name": "Nvme$subsystem", 00:25:39.559 "trtype": "$TEST_TRANSPORT", 00:25:39.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.559 "adrfam": "ipv4", 00:25:39.559 "trsvcid": "$NVMF_PORT", 00:25:39.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.559 "hdgst": ${hdgst:-false}, 00:25:39.559 "ddgst": ${ddgst:-false} 00:25:39.559 }, 00:25:39.559 "method": "bdev_nvme_attach_controller" 00:25:39.559 } 00:25:39.559 EOF 00:25:39.559 )") 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.560 { 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme$subsystem", 00:25:39.560 "trtype": "$TEST_TRANSPORT", 00:25:39.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "$NVMF_PORT", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.560 "hdgst": ${hdgst:-false}, 00:25:39.560 "ddgst": ${ddgst:-false} 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 } 00:25:39.560 EOF 00:25:39.560 )") 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:39.560 { 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme$subsystem", 00:25:39.560 "trtype": "$TEST_TRANSPORT", 00:25:39.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.560 
"adrfam": "ipv4", 00:25:39.560 "trsvcid": "$NVMF_PORT", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.560 "hdgst": ${hdgst:-false}, 00:25:39.560 "ddgst": ${ddgst:-false} 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 } 00:25:39.560 EOF 00:25:39.560 )") 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:39.560 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:25:39.560 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme1", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme2", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme3", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme4", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme5", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme6", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme7", 
00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme8", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme9", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 },{ 00:25:39.560 "params": { 00:25:39.560 "name": "Nvme10", 00:25:39.560 "trtype": "tcp", 00:25:39.560 "traddr": "10.0.0.2", 00:25:39.560 "adrfam": "ipv4", 00:25:39.560 "trsvcid": "4420", 00:25:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:39.560 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:39.560 "hdgst": false, 00:25:39.560 "ddgst": false 00:25:39.560 }, 00:25:39.560 "method": "bdev_nvme_attach_controller" 00:25:39.560 }' 00:25:39.560 [2024-07-25 12:38:12.811737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.560 [2024-07-25 12:38:12.906003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.958 Running I/O for 10 seconds... 
00:25:40.958 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.958 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:25:40.958 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:40.958 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.958 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:41.219 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:41.480 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:41.742 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:41.742 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:41.742 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.742 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.742 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:41.742 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.742 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=141 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 141 -ge 100 ']' 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 505727 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 505727 ']' 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 505727 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 505727 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:42.018 12:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 505727' 00:25:42.018 killing process with pid 505727 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 505727 00:25:42.018 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 505727 00:25:42.018 [2024-07-25 12:38:15.246888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.018 [2024-07-25 12:38:15.247046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.018 [2024-07-25 12:38:15.247074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.018 [2024-07-25 12:38:15.247095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.018 [2024-07-25 12:38:15.247115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.018 [2024-07-25 12:38:15.247135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.018 [2024-07-25 12:38:15.247155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be 
set 00:25:42.019 [2024-07-25 12:38:15.247487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.247982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.248563] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741b40 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.019 [2024-07-25 12:38:15.252623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the 
state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.252999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.253837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912c20 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 
12:38:15.256196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.020 [2024-07-25 12:38:15.256622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same 
with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.256990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257120] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.257503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1742000 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.263934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.263990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ee510 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.264106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2229110 is same with the state(5) to be set 00:25:42.021 [2024-07-25 12:38:15.264215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.021 [2024-07-25 12:38:15.264259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.021 [2024-07-25 12:38:15.264270] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.022 [2024-07-25 12:38:15.264285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d4d0 is same with the state(5) to be set 00:25:42.022 [2024-07-25 12:38:15.264315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.022 [2024-07-25 12:38:15.264324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.022 [2024-07-25 12:38:15.264338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.022 [2024-07-25 12:38:15.264353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.022 [2024-07-25 12:38:15.264367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236810 is same with the state(5) to be set 00:25:42.022 [2024-07-25 12:38:15.264858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.264894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.264922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.264940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.264959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.264984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.264993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with [2024-07-25 12:38:15.265271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.022 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.022 [2024-07-25 12:38:15.265303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.022 [2024-07-25 12:38:15.265316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:42.022 [2024-07-25 12:38:15.265324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.022 [2024-07-25 12:38:15.265328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.022 [2024-07-25 12:38:15.265335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.022 [2024-07-25 12:38:15.265340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.022 [2024-07-25 12:38:15.265347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with [2024-07-25 12:38:15.265349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.022 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with [2024-07-25 12:38:15.265387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:1the state(5) to be set 00:25:42.023 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.265405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:1[2024-07-25 12:38:15.265423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.265441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023
[2024-07-25 12:38:15.265667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265815]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.023 [2024-07-25 12:38:15.265880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.023 [2024-07-25 12:38:15.265885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.023 [2024-07-25 12:38:15.265892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.265898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.265904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.265907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.265916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.265919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.265929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.265929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.265946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with [2024-07-25 12:38:15.265947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:1the state(5) to be set 00:25:42.025 28 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:42.025 [2024-07-25 12:38:15.265963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.265974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.265981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.265982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.265993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1[2024-07-25 12:38:15.265994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:1[2024-07-25 12:38:15.266022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with [2024-07-25 12:38:15.266078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.025 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with [2024-07-25 12:38:15.266093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:1the state(5) to be set 
00:25:42.025 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.266107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1[2024-07-25 12:38:15.266124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17429a0 is same with the state(5) to be set 00:25:42.025 [2024-07-25 12:38:15.266235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17429a0 is same with [2024-07-25 12:38:15.266248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:1the state(5) to be set 00:25:42.025 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.025 [2024-07-25 12:38:15.266375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.025 [2024-07-25 12:38:15.266383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.266418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:42.026 [2024-07-25 12:38:15.266486] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23705a0 was disconnected and 
freed. reset controller. 00:25:42.026 [2024-07-25 12:38:15.270117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.270372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:12the state(5) to be set 00:25:42.026 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.270568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.026 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.270593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.026 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.270618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.026 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.270642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.026 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.026 [2024-07-25 12:38:15.270744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.026 [2024-07-25 12:38:15.270756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.026 [2024-07-25 12:38:15.270760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.270764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.026 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 
12:38:15.270781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.270785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.270811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.270981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.270992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.270996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.271048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1the state(5) to be set 00:25:42.027 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.271081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:1the state(5) to be set 00:25:42.027 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 
12:38:15.271097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.271270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 12:38:15.271294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 the state(5) to be set 00:25:42.027 [2024-07-25 12:38:15.271316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.271325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:25:42.027 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.027 [2024-07-25 12:38:15.271342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.027 [2024-07-25 12:38:15.271342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.271446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1the state(5) to be set 00:25:42.028 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.271483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:1the state(5) to be set 00:25:42.028 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with [2024-07-25 12:38:15.271556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:1the state(5) to be set 00:25:42.028 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743320 is same with the state(5) to be set 00:25:42.028 [2024-07-25 12:38:15.271606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:42.028 [2024-07-25 12:38:15.271762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.028 [2024-07-25 12:38:15.271803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.271884] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2223d30 was disconnected and freed. reset controller. 00:25:42.028 [2024-07-25 12:38:15.272645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:42.028 [2024-07-25 12:38:15.272696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236810 (9): Bad file descriptor 00:25:42.028 [2024-07-25 12:38:15.274346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:42.028 [2024-07-25 12:38:15.274411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a3c0 (9): Bad file descriptor 00:25:42.028 [2024-07-25 12:38:15.274471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ee510 (9): Bad file descriptor 00:25:42.028 [2024-07-25 12:38:15.274505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.028 [2024-07-25 12:38:15.274517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.274528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.028 [2024-07-25 12:38:15.274544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.274563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.028 [2024-07-25 12:38:15.274570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.274581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.028 [2024-07-25 12:38:15.274590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.274597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2258740 is same with the state(5) to be set 
00:25:42.028 [2024-07-25 12:38:15.274621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2229110 (9): Bad file descriptor 00:25:42.028 [2024-07-25 12:38:15.274637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224d4d0 (9): Bad file descriptor 00:25:42.028 [2024-07-25 12:38:15.274662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.028 [2024-07-25 12:38:15.274673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.028 [2024-07-25 12:38:15.274682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.029 [2024-07-25 12:38:15.274690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.274698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.029 [2024-07-25 12:38:15.274705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.274714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.029 [2024-07-25 12:38:15.274721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.274728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254f40 is same with the state(5) to be set 00:25:42.029 [2024-07-25 12:38:15.275418] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:42.029 [2024-07-25 12:38:15.275741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.029 [2024-07-25 12:38:15.275767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236810 with addr=10.0.0.2, port=4420 00:25:42.029 [2024-07-25 12:38:15.275778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236810 is same with the state(5) to be set 00:25:42.029 [2024-07-25 12:38:15.275841] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:42.029 [2024-07-25 12:38:15.275875] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:42.029 [2024-07-25 12:38:15.276569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.029 [2024-07-25 12:38:15.276598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230a3c0 with addr=10.0.0.2, port=4420 00:25:42.029 [2024-07-25 12:38:15.276609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a3c0 is same with the state(5) to be set 00:25:42.029 [2024-07-25 12:38:15.276622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236810 (9): Bad file descriptor 00:25:42.029 [2024-07-25 12:38:15.276690] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:42.029 [2024-07-25 12:38:15.276812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a3c0 (9): Bad file 
descriptor 00:25:42.029 [2024-07-25 12:38:15.276834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:42.029 [2024-07-25 12:38:15.276844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:42.029 [2024-07-25 12:38:15.276857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:42.029 [2024-07-25 12:38:15.276946] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:42.029 [2024-07-25 12:38:15.277009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.029 [2024-07-25 12:38:15.277021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:42.029 [2024-07-25 12:38:15.277028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:42.029 [2024-07-25 12:38:15.277036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:42.029 [2024-07-25 12:38:15.277118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.029 [2024-07-25 12:38:15.284426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2258740 (9): Bad file descriptor 00:25:42.029 [2024-07-25 12:38:15.284491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254f40 (9): Bad file descriptor 00:25:42.029 [2024-07-25 12:38:15.284646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:42.029 [2024-07-25 12:38:15.284747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 
[2024-07-25 12:38:15.284918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.284982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.284992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.029 [2024-07-25 12:38:15.285001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.029 [2024-07-25 12:38:15.285010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 
12:38:15.285081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285248] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.030 [2024-07-25 12:38:15.285573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.030 [2024-07-25 12:38:15.285584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.285723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.285731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361f70 is same with the state(5) to be set 00:25:42.031 [2024-07-25 12:38:15.286994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287023] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.031 [2024-07-25 12:38:15.287478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.031 [2024-07-25 12:38:15.287487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:42.032 [2024-07-25 12:38:15.287722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 
12:38:15.287887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.287983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.287992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.288000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.288009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.288016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.288024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.288030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.288040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.288047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.288056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.288062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.288071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.288082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.288091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236f0f0 is same with the state(5) to be set 00:25:42.032 [2024-07-25 12:38:15.289445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.032 [2024-07-25 12:38:15.289472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.032 [2024-07-25 12:38:15.289488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.289985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.289994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.033 [2024-07-25 12:38:15.290133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.033 [2024-07-25 12:38:15.290143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.034 [2024-07-25 12:38:15.290538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.034 [2024-07-25 12:38:15.290552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359eb0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.034 [2024-07-25 12:38:15.292240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:42.034 [2024-07-25 12:38:15.292235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:42.034 [2024-07-25 12:38:15.292261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292295] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.034 [2024-07-25 12:38:15.292455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:42.035 [2024-07-25 12:38:15.292474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 
00:25:42.035 [2024-07-25 12:38:15.292745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.035 [2024-07-25 12:38:15.292872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with [2024-07-25 12:38:15.292879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2229110 witthe state(5) to be set 00:25:42.035 h addr=10.0.0.2, port=4420 00:25:42.035 [2024-07-25 12:38:15.292894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2229110 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.292996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.035 [2024-07-25 12:38:15.293096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 
12:38:15.293105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224d4d0 with addr=10.0.0.2, port=4420 00:25:42.035 [2024-07-25 12:38:15.293116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d4d0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with [2024-07-25 12:38:15.293326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.035 the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ee510 with addr=10.0.0.2, port=4420 00:25:42.035 [2024-07-25 12:38:15.293346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with [2024-07-25 12:38:15.293352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ee510 is same the state(5) to be set 00:25:42.035 with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.293429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17437e0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.294280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:42.035 [2024-07-25 12:38:15.294645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.035 [2024-07-25 12:38:15.294663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236810 with addr=10.0.0.2, port=4420 00:25:42.035 [2024-07-25 12:38:15.294672] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236810 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.294683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2229110 (9): Bad file descriptor 00:25:42.035 [2024-07-25 12:38:15.294695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224d4d0 (9): Bad file descriptor 00:25:42.035 [2024-07-25 12:38:15.294705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ee510 (9): Bad file descriptor 00:25:42.035 [2024-07-25 12:38:15.295119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.035 [2024-07-25 12:38:15.295137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230a3c0 with addr=10.0.0.2, port=4420 00:25:42.035 [2024-07-25 12:38:15.295145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a3c0 is same with the state(5) to be set 00:25:42.035 [2024-07-25 12:38:15.295156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236810 (9): Bad file descriptor 00:25:42.035 [2024-07-25 12:38:15.295166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.035 [2024-07-25 12:38:15.295173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.035 [2024-07-25 12:38:15.295182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.035 [2024-07-25 12:38:15.295195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:42.035 [2024-07-25 12:38:15.295202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:42.035 [2024-07-25 12:38:15.295211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:42.035 [2024-07-25 12:38:15.295222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:42.035 [2024-07-25 12:38:15.295228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:42.035 [2024-07-25 12:38:15.295235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:25:42.035 [2024-07-25 12:38:15.295264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.035 [2024-07-25 12:38:15.295280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.035 [2024-07-25 12:38:15.295289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.035 [2024-07-25 12:38:15.295297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.035 [2024-07-25 12:38:15.295307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.035 [2024-07-25 12:38:15.295314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.035 [2024-07-25 12:38:15.295322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.036 [2024-07-25 12:38:15.295329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c610 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.036 [2024-07-25 12:38:15.295389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.036 [2024-07-25 12:38:15.295407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.036 [2024-07-25 12:38:15.295422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.036 [2024-07-25 12:38:15.295440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22657e0 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with [2024-07-25 12:38:15.295603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.036 the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.036 [2024-07-25 12:38:15.295624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.036 [2024-07-25 12:38:15.295647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with [2024-07-25 12:38:15.295653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a3c0 (9): the state(5) to be set 00:25:42.036 Bad file descriptor 00:25:42.036 [2024-07-25 12:38:15.295668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:42.036 [2024-07-25 12:38:15.295670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with [2024-07-25 12:38:15.295675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] contrthe state(5) to be set 00:25:42.036 oller reinitialization failed 00:25:42.036 [2024-07-25 12:38:15.295691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:42.036 [2024-07-25 12:38:15.295693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with [2024-07-25 12:38:15.295759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128the state(5) to be set 00:25:42.036 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is 
same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.295977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.295992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.295992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.296001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.296010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.296012] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.296020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.296035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.296037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.296046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.296081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.036 [2024-07-25 12:38:15.296084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.036 [2024-07-25 12:38:15.296091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.036 [2024-07-25 12:38:15.296104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.037 [2024-07-25 12:38:15.296669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.037 [2024-07-25 12:38:15.296690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.037 [2024-07-25 12:38:15.296699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25
12:38:15.296813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912760 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.296884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.296984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.296991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.297000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.297007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.297017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.308499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.308591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.308604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.308626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.308634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.308644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.308653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.308662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.308670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.308679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.308687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.308696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2371ae0 is same with the state(5) to be set 00:25:42.038 [2024-07-25 12:38:15.310027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.038 [2024-07-25 12:38:15.310242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.038 [2024-07-25 12:38:15.310251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:42.039 [2024-07-25 12:38:15.310793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.039 [2024-07-25 12:38:15.310811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.039 [2024-07-25 12:38:15.310820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 
12:38:15.310972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.310989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.310998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311144] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.311178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.311187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22250c0 is same with the state(5) to be set 00:25:42.040 [2024-07-25 12:38:15.312576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.040 [2024-07-25 12:38:15.312608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:42.040 [2024-07-25 12:38:15.312625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:42.040 [2024-07-25 12:38:15.312661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:42.040 [2024-07-25 12:38:15.312670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:42.040 [2024-07-25 12:38:15.312682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:42.040 [2024-07-25 12:38:15.312747] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:42.040 [2024-07-25 12:38:15.312765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c610 (9): Bad file descriptor 00:25:42.040 [2024-07-25 12:38:15.312808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.040 [2024-07-25 12:38:15.312821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.312833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.040 [2024-07-25 12:38:15.312841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.312849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.040 [2024-07-25 12:38:15.312857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.312871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.040 [2024-07-25 12:38:15.312880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.312888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ebd50 is same with the state(5) to be set 00:25:42.040 [2024-07-25 12:38:15.312905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22657e0 (9): Bad file descriptor 00:25:42.040 [2024-07-25 12:38:15.312976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.312990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.313011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.313030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.313047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.313069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.313089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.313108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-25 12:38:15.313128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.040 [2024-07-25 12:38:15.313138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.041 [2024-07-25 12:38:15.313834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.041 [2024-07-25 12:38:15.313845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.313853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.313862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.313872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.313888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.313899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.313912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.313922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.313934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.313943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.313956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.313966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.313979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.313990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.042 [2024-07-25 12:38:15.314240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.042 [2024-07-25 12:38:15.314251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2840 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.314315] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x22d2840 was disconnected and freed. reset controller. 00:25:42.042 [2024-07-25 12:38:15.314398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.042 [2024-07-25 12:38:15.314934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.042 [2024-07-25 12:38:15.314997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2258740 with addr=10.0.0.2, port=4420 00:25:42.042 [2024-07-25 12:38:15.315014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2258740 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.315395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.042 [2024-07-25 12:38:15.315412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254f40 with addr=10.0.0.2, port=4420 00:25:42.042 [2024-07-25 12:38:15.315423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254f40 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.317793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:42.042 [2024-07-25 12:38:15.317823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:42.042 [2024-07-25 12:38:15.317837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.042 [2024-07-25 12:38:15.317851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:42.042 [2024-07-25 12:38:15.317863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:42.042 [2024-07-25 12:38:15.317926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2258740 (9): Bad file descriptor 00:25:42.042 [2024-07-25 12:38:15.317943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254f40 (9): Bad file descriptor 00:25:42.042 [2024-07-25 12:38:15.318367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.042 [2024-07-25 12:38:15.318391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ee510 with addr=10.0.0.2, port=4420 00:25:42.042 [2024-07-25 12:38:15.318402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ee510 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.318844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.042 [2024-07-25 12:38:15.318905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224d4d0 with addr=10.0.0.2, port=4420 00:25:42.042 [2024-07-25 12:38:15.318923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d4d0 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.319316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.042 [2024-07-25 12:38:15.319335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2229110 with addr=10.0.0.2, port=4420 00:25:42.042 [2024-07-25 12:38:15.319345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2229110 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.319598] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.042 [2024-07-25 12:38:15.319633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236810 with addr=10.0.0.2, port=4420 00:25:42.042 [2024-07-25 12:38:15.319644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236810 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.319997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.042 [2024-07-25 12:38:15.320013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c610 with addr=10.0.0.2, port=4420 00:25:42.042 [2024-07-25 12:38:15.320023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c610 is same with the state(5) to be set 00:25:42.042 [2024-07-25 12:38:15.320033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:42.042 [2024-07-25 12:38:15.320041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:42.042 [2024-07-25 12:38:15.320053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:42.042 [2024-07-25 12:38:15.320075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:42.042 [2024-07-25 12:38:15.320083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:42.042 [2024-07-25 12:38:15.320092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:42.042 [2024-07-25 12:38:15.320545] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:42.042 [2024-07-25 12:38:15.320585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.042 [2024-07-25 12:38:15.320595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.042 [2024-07-25 12:38:15.320609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ee510 (9): Bad file descriptor 00:25:42.042 [2024-07-25 12:38:15.320623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224d4d0 (9): Bad file descriptor 00:25:42.042 [2024-07-25 12:38:15.320635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2229110 (9): Bad file descriptor 00:25:42.042 [2024-07-25 12:38:15.320647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236810 (9): Bad file descriptor 00:25:42.042 [2024-07-25 12:38:15.320658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c610 (9): Bad file descriptor 00:25:42.042 [2024-07-25 12:38:15.320718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:42.042 [2024-07-25 12:38:15.320730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.320740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:25:42.043 [2024-07-25 12:38:15.320754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.320762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.320771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:42.043 [2024-07-25 12:38:15.320783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.320798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.320808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.043 [2024-07-25 12:38:15.320820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.320831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.320840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:42.043 [2024-07-25 12:38:15.320852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.320861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.320870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:42.043 [2024-07-25 12:38:15.320908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.320920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.320928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.320937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.320946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
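The errno values that repeat through this stretch of the log are standard Linux codes: 111 is ECONNREFUSED (the connect() attempts to 10.0.0.2 port 4420 are being actively refused, which suggests nothing is listening on the target side at this point in the test), and the recurring "(9): Bad file descriptor" flush failures are errno 9, EBADF (the qpair's socket has already been torn down by the time a flush is attempted). A minimal standalone C sketch, separate from the test itself, that decodes both values (numeric values assume Linux/glibc):

/* Decode the errno values seen in this log. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        /* posix_sock_create logs "connect() failed, errno = 111". */
        printf("errno 111 (ECONNREFUSED=%d): %s\n", ECONNREFUSED, strerror(111));
        /* nvme_tcp_qpair_process_completions logs "(9): Bad file descriptor". */
        printf("errno 9   (EBADF=%d): %s\n", EBADF, strerror(9));
        return 0;
}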
00:25:42.043 [2024-07-25 12:38:15.322626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ebd50 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.322717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:42.043 [2024-07-25 12:38:15.322733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:42.043 [2024-07-25 12:38:15.322784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:42.043 [2024-07-25 12:38:15.322799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:42.043 [2024-07-25 12:38:15.323038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.323057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230a3c0 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.323067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a3c0 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.323256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.323270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22657e0 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.323280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22657e0 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.323630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.323648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254f40 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.323657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254f40 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.323855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.323873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2258740 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.323883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2258740 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.323902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a3c0 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.323916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22657e0 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.323944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254f40 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.323960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2258740 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.323972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.323981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.323992] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:42.043 [2024-07-25 12:38:15.324005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.324015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.324024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:42.043 [2024-07-25 12:38:15.324052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.324066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.324075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.324083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.324092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:42.043 [2024-07-25 12:38:15.324105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.324114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.324123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:42.043 [2024-07-25 12:38:15.324150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.324161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.043 [2024-07-25 12:38:15.327987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:42.043 [2024-07-25 12:38:15.328029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:42.043 [2024-07-25 12:38:15.328076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.043 [2024-07-25 12:38:15.328087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:42.043 [2024-07-25 12:38:15.328097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:42.043 [2024-07-25 12:38:15.328354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.328370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c610 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.328379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c610 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.328693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.328707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236810 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.328716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236810 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.329076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.329092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2229110 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.329100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2229110 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.329409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.329422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224d4d0 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.329431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d4d0 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.329731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.043 [2024-07-25 12:38:15.329745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ee510 with addr=10.0.0.2, port=4420 00:25:42.043 [2024-07-25 12:38:15.329754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ee510 is same with the state(5) to be set 00:25:42.043 [2024-07-25 12:38:15.329764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c610 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.329774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236810 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.329799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2229110 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.329809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x224d4d0 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.329820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ee510 (9): Bad file descriptor 00:25:42.043 [2024-07-25 12:38:15.329829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.329837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.329846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:42.043 [2024-07-25 12:38:15.329857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:42.043 [2024-07-25 12:38:15.329866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:42.043 [2024-07-25 12:38:15.329873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:42.043 [2024-07-25 12:38:15.329897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.329906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.043 [2024-07-25 12:38:15.329913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.044 [2024-07-25 12:38:15.329921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.044 [2024-07-25 12:38:15.329930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.044 [2024-07-25 12:38:15.329941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:42.044 [2024-07-25 12:38:15.329949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:42.044 [2024-07-25 12:38:15.329957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:42.044 [2024-07-25 12:38:15.329969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:42.044 [2024-07-25 12:38:15.329977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:42.044 [2024-07-25 12:38:15.329989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:42.044 [2024-07-25 12:38:15.330024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.044 [2024-07-25 12:38:15.330034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.044 [2024-07-25 12:38:15.330042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
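Every aborted READ in the command dumps above and below carries the same completion status pair, (00/08). SPDK prints NVMe completions as (status code type / status code); status code type 0x0 is the Generic Command Status set and, per the NVMe base specification, status code 0x08 in that set is "Command Aborted due to SQ Deletion" -- these reads were still outstanding on submission queues that were deleted when their qpairs were disconnected. A small illustrative decode (local helper names only, not SPDK code):

/* Spell out the "(00/08)" status pair printed with each aborted READ. */
#include <stdio.h>

static const char *decode_nvme_status(unsigned int sct, unsigned int sc)
{
        /* NVMe base spec: SCT 0x0 = Generic Command Status,
         * SC 0x08 in that set = Command Aborted due to SQ Deletion. */
        if (sct == 0x0 && sc == 0x08)
                return "ABORTED - SQ DELETION";
        return "other status";
}

int main(void)
{
        printf("(00/08) -> %s\n", decode_nvme_status(0x00, 0x08));
        return 0;
}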
00:25:42.044 [2024-07-25 12:38:15.332699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 
12:38:15.332919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.332985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.332994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333127] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.044 [2024-07-25 12:38:15.333259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.044 [2024-07-25 12:38:15.333268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.333978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.333991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.334001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.334012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.334022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.334033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.334043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.334056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.334065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.334077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.334085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.334096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.045 [2024-07-25 12:38:15.334105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.045 [2024-07-25 12:38:15.334116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4eb0 is same with the state(5) to be set 00:25:42.046 task offset: 24960 on job bdev=Nvme3n1 fails 00:25:42.046 00:25:42.046 Latency(us) 00:25:42.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.046 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme1n1 ended in about 0.94 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme1n1 : 0.94 146.18 9.14 67.80 0.00 296041.04 29037.49 266176.98 00:25:42.046 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme2n1 ended in about 0.95 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme2n1 : 0.95 135.25 8.45 67.63 0.00 306219.59 33473.77 269403.37 00:25:42.046 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme3n1 ended in about 0.93 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme3n1 : 0.93 207.03 12.94 69.01 0.00 220459.57 4587.52 258111.02 00:25:42.046 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme4n1 ended in about 0.97 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme4n1 : 0.97 198.56 12.41 66.19 0.00 225903.06 22887.19 248431.85 00:25:42.046 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme5n1 ended in about 0.93 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme5n1 : 0.93 206.20 12.89 68.73 0.00 212508.16 4990.82 219394.36 00:25:42.046 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme6n1 ended in about 0.97 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme6n1 : 0.97 132.04 8.25 66.02 0.00 290182.70 28230.89 248431.85 00:25:42.046 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:42.046 Job: Nvme7n1 ended in about 0.97 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme7n1 : 0.97 197.03 12.31 65.68 0.00 214410.63 27021.00 232299.91 00:25:42.046 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme8n1 : 0.94 272.60 17.04 0.00 0.00 200950.74 21475.64 245205.46 00:25:42.046 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme9n1 ended in about 0.99 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme9n1 : 0.99 128.99 8.06 64.49 0.00 280103.38 44161.18 306506.83 00:25:42.046 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:42.046 Job: Nvme10n1 ended in about 0.95 seconds with error 00:25:42.046 Verification LBA range: start 0x0 length 0x400 00:25:42.046 Nvme10n1 : 0.95 134.90 8.43 67.45 0.00 259795.10 42951.29 287148.50 00:25:42.046 =================================================================================================================== 00:25:42.046 Total : 1758.80 109.92 603.00 0.00 245765.99 4587.52 306506.83 00:25:42.046 [2024-07-25 12:38:15.363237] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:42.046 [2024-07-25 12:38:15.363305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:42.046 [2024-07-25 12:38:15.363888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.363914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ebd50 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.363925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ebd50 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.363976] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:42.046 [2024-07-25 12:38:15.363988] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:42.046 [2024-07-25 12:38:15.363998] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:42.046 [2024-07-25 12:38:15.364008] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:42.046 [2024-07-25 12:38:15.364297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:42.046 [2024-07-25 12:38:15.364312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:42.046 [2024-07-25 12:38:15.364321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:42.046 [2024-07-25 12:38:15.364329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:42.046 [2024-07-25 12:38:15.364388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ebd50 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.364694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:42.046 [2024-07-25 12:38:15.364710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:42.046 [2024-07-25 12:38:15.364721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:42.046 [2024-07-25 12:38:15.365045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.365061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22657e0 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.365069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22657e0 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.365384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.365395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230a3c0 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.365402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a3c0 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.365702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.365715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2258740 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.365722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2258740 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.366021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.366034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2254f40 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.366048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2254f40 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.366056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:42.046 [2024-07-25 12:38:15.366063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:42.046 [2024-07-25 12:38:15.366072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:25:42.046 [2024-07-25 12:38:15.366106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:42.046 [2024-07-25 12:38:15.366116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.046 [2024-07-25 12:38:15.366137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.046 [2024-07-25 12:38:15.366363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.366375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236810 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.366382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236810 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.366665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.366676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c610 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.366683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c610 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.367006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.367017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ee510 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.367024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ee510 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.367034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22657e0 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.367044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a3c0 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.367053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2258740 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.367061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254f40 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.367273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.367285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224d4d0 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.367292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d4d0 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.367484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.046 [2024-07-25 12:38:15.367494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2229110 with addr=10.0.0.2, port=4420 00:25:42.046 [2024-07-25 12:38:15.367501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2229110 is same with the state(5) to be set 00:25:42.046 [2024-07-25 12:38:15.367509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236810 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.367517] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c610 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.367526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ee510 (9): Bad file descriptor 00:25:42.046 [2024-07-25 12:38:15.367538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:42.046 [2024-07-25 12:38:15.367545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224d4d0 (9): Bad file descriptor 00:25:42.047 [2024-07-25 12:38:15.367692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2229110 (9): Bad file descriptor 00:25:42.047 [2024-07-25 12:38:15.367699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:42.047 [2024-07-25 12:38:15.367720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.047 [2024-07-25 12:38:15.367846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.047 [2024-07-25 12:38:15.367852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.047 [2024-07-25 12:38:15.367878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.047 [2024-07-25 12:38:15.367886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.307 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:25:42.307 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 506063 00:25:43.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (506063) - No such process 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.694 rmmod nvme_tcp 00:25:43.694 rmmod nvme_fabrics 00:25:43.694 rmmod nvme_keyring 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.694 12:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.694 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.608 00:25:45.608 real 0m8.105s 00:25:45.608 user 0m19.808s 00:25:45.608 sys 0m1.557s 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.608 ************************************ 00:25:45.608 END TEST nvmf_shutdown_tc3 00:25:45.608 ************************************ 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:25:45.608 00:25:45.608 real 0m34.539s 00:25:45.608 user 1m17.780s 00:25:45.608 sys 0m11.480s 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:45.608 ************************************ 00:25:45.608 END TEST nvmf_shutdown 00:25:45.608 ************************************ 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:25:45.608 00:25:45.608 real 12m27.956s 00:25:45.608 user 26m19.241s 00:25:45.608 sys 3m38.017s 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.608 12:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:45.608 ************************************ 00:25:45.608 END TEST nvmf_target_extra 00:25:45.608 ************************************ 00:25:45.608 12:38:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:45.608 12:38:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:45.608 12:38:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:45.608 12:38:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.608 12:38:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.608 ************************************ 00:25:45.608 START TEST nvmf_host 00:25:45.608 ************************************ 00:25:45.608 12:38:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:45.869 * Looking for test storage... 
00:25:45.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:45.869 12:38:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.869 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.870 ************************************ 00:25:45.870 START TEST nvmf_multicontroller 00:25:45.870 ************************************ 00:25:45.870 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:46.132 * Looking for test storage... 
00:25:46.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.132 12:38:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.132 12:38:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.269 12:38:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:54.269 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:54.269 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.269 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:54.270 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:54.270 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:54.270 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:54.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:25:54.531 00:25:54.531 --- 10.0.0.2 ping statistics --- 00:25:54.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.531 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:25:54.531 00:25:54.531 --- 10.0.0.1 ping statistics --- 00:25:54.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.531 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=511252 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 511252 00:25:54.531 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:54.532 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 511252 ']' 00:25:54.532 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.532 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.532 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.532 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.532 12:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:54.532 [2024-07-25 12:38:27.833585] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:25:54.532 [2024-07-25 12:38:27.833656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.532 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.532 [2024-07-25 12:38:27.923184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:54.793 [2024-07-25 12:38:28.030238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.793 [2024-07-25 12:38:28.030301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.793 [2024-07-25 12:38:28.030312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.793 [2024-07-25 12:38:28.030321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.793 [2024-07-25 12:38:28.030329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.793 [2024-07-25 12:38:28.030492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.793 [2024-07-25 12:38:28.030638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.793 [2024-07-25 12:38:28.030658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.365 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.365 [2024-07-25 12:38:28.771294] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 Malloc0 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 
12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 [2024-07-25 12:38:28.856765] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 [2024-07-25 12:38:28.868667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 Malloc1 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=511474 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 511474 /var/tmp/bdevperf.sock 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 511474 ']' 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:55.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
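The trace above builds the multicontroller topology: one TCP transport, two subsystems (cnode1 and cnode2), each backed by a 64 MiB malloc bdev and listening on 10.0.0.2 ports 4420 and 4421, plus a bdevperf instance parked on /var/tmp/bdevperf.sock waiting for RPCs. A minimal stand-alone sketch of that RPC sequence is shown below; it is not the test script itself. The rpc.py and bdevperf paths relative to an SPDK checkout and an already-running nvmf_tgt on the default RPC socket are assumptions; the RPC names and arguments are copied from the commands in the trace.

    #!/usr/bin/env bash
    # Sketch of the setup phase traced above (multicontroller test prologue).
    set -euo pipefail

    RPC=./scripts/rpc.py        # assumed path to SPDK's rpc.py
    ADDR=10.0.0.2               # target address used in this run

    # transport options copied verbatim from the traced run
    $RPC nvmf_create_transport -t tcp -o -u 8192

    setup_subsystem() {         # args: <cnode name> <malloc bdev name> <serial>
        local nqn="nqn.2016-06.io.spdk:$1" malloc=$2 serial=$3
        $RPC bdev_malloc_create 64 512 -b "$malloc"      # 64 MiB bdev, 512-byte blocks
        $RPC nvmf_create_subsystem "$nqn" -a -s "$serial"
        $RPC nvmf_subsystem_add_ns "$nqn" "$malloc"
        # two listeners per subsystem so the multipath attach cases can be exercised
        $RPC nvmf_subsystem_add_listener "$nqn" -t tcp -a "$ADDR" -s 4420
        $RPC nvmf_subsystem_add_listener "$nqn" -t tcp -a "$ADDR" -s 4421
    }

    setup_subsystem cnode1 Malloc0 SPDK00000000000001
    setup_subsystem cnode2 Malloc1 SPDK00000000000002

    # bdevperf started idle (-z) on its own RPC socket; flags copied from the trace
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &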
00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.626 12:38:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:56.566 12:38:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.566 12:38:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:25:56.566 12:38:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:56.566 12:38:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.566 12:38:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 NVMe0n1 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.826 1 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 request: 00:25:56.826 { 00:25:56.826 "name": "NVMe0", 00:25:56.826 "trtype": "tcp", 00:25:56.826 "traddr": "10.0.0.2", 00:25:56.826 "adrfam": "ipv4", 00:25:56.826 
"trsvcid": "4420", 00:25:56.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.826 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:56.826 "hostaddr": "10.0.0.2", 00:25:56.826 "hostsvcid": "60000", 00:25:56.826 "prchk_reftag": false, 00:25:56.826 "prchk_guard": false, 00:25:56.826 "hdgst": false, 00:25:56.826 "ddgst": false, 00:25:56.826 "method": "bdev_nvme_attach_controller", 00:25:56.826 "req_id": 1 00:25:56.826 } 00:25:56.826 Got JSON-RPC error response 00:25:56.826 response: 00:25:56.826 { 00:25:56.826 "code": -114, 00:25:56.826 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:56.826 } 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 request: 00:25:56.826 { 00:25:56.826 "name": "NVMe0", 00:25:56.826 "trtype": "tcp", 00:25:56.826 "traddr": "10.0.0.2", 00:25:56.826 "adrfam": "ipv4", 00:25:56.826 "trsvcid": "4420", 00:25:56.826 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:56.826 "hostaddr": "10.0.0.2", 00:25:56.826 "hostsvcid": "60000", 00:25:56.826 "prchk_reftag": false, 00:25:56.826 "prchk_guard": false, 00:25:56.826 "hdgst": false, 00:25:56.826 "ddgst": false, 00:25:56.826 "method": "bdev_nvme_attach_controller", 00:25:56.826 "req_id": 1 00:25:56.826 } 00:25:56.826 Got JSON-RPC error response 00:25:56.826 response: 00:25:56.826 { 00:25:56.826 "code": -114, 00:25:56.826 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:25:56.826 } 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 request: 00:25:56.826 { 00:25:56.826 "name": "NVMe0", 00:25:56.826 "trtype": "tcp", 00:25:56.826 "traddr": "10.0.0.2", 00:25:56.826 "adrfam": "ipv4", 00:25:56.826 "trsvcid": "4420", 00:25:56.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.826 "hostaddr": "10.0.0.2", 00:25:56.826 "hostsvcid": "60000", 00:25:56.826 "prchk_reftag": false, 00:25:56.826 "prchk_guard": false, 00:25:56.826 "hdgst": false, 00:25:56.826 "ddgst": false, 00:25:56.826 "multipath": "disable", 00:25:56.826 "method": "bdev_nvme_attach_controller", 00:25:56.826 "req_id": 1 00:25:56.826 } 00:25:56.826 Got JSON-RPC error response 00:25:56.826 response: 00:25:56.826 { 00:25:56.826 "code": -114, 00:25:56.826 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:56.826 } 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 request: 00:25:56.826 { 00:25:56.826 "name": "NVMe0", 00:25:56.826 "trtype": "tcp", 00:25:56.826 "traddr": "10.0.0.2", 00:25:56.826 "adrfam": "ipv4", 00:25:56.826 "trsvcid": "4420", 00:25:56.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.826 "hostaddr": "10.0.0.2", 00:25:56.826 "hostsvcid": "60000", 00:25:56.826 "prchk_reftag": false, 00:25:56.826 "prchk_guard": false, 00:25:56.826 "hdgst": false, 00:25:56.826 "ddgst": false, 00:25:56.826 "multipath": "failover", 00:25:56.826 "method": "bdev_nvme_attach_controller", 00:25:56.826 "req_id": 1 00:25:56.826 } 00:25:56.826 Got JSON-RPC error response 00:25:56.826 response: 00:25:56.826 { 00:25:56.826 "code": -114, 00:25:56.826 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:56.826 } 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.826 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:57.086 00:25:57.086 12:38:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:57.086 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:57.086 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:58.468 0 00:25:58.468 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:58.468 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.468 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 511474 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 511474 ']' 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 511474 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 511474 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:58.469 
12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 511474' 00:25:58.469 killing process with pid 511474 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 511474 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 511474 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:25:58.469 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:58.469 [2024-07-25 12:38:28.997075] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:25:58.469 [2024-07-25 12:38:28.997149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511474 ] 00:25:58.469 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.469 [2024-07-25 12:38:29.082479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.469 [2024-07-25 12:38:29.176873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.469 [2024-07-25 12:38:30.375831] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 33501146-0bc6-492b-9fe6-50ebc50c9b01 already exists 00:25:58.469 [2024-07-25 12:38:30.375875] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:33501146-0bc6-492b-9fe6-50ebc50c9b01 alias for bdev NVMe1n1 00:25:58.469 [2024-07-25 12:38:30.375885] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:58.469 Running I/O for 1 seconds... 
00:25:58.469 00:25:58.469 Latency(us) 00:25:58.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.469 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:58.469 NVMe0n1 : 1.01 6782.49 26.49 0.00 0.00 18805.40 9175.04 27625.94 00:25:58.469 =================================================================================================================== 00:25:58.469 Total : 6782.49 26.49 0.00 0.00 18805.40 9175.04 27625.94 00:25:58.469 Received shutdown signal, test time was about 1.000000 seconds 00:25:58.469 00:25:58.469 Latency(us) 00:25:58.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.469 =================================================================================================================== 00:25:58.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:58.469 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.469 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.469 rmmod nvme_tcp 00:25:58.469 rmmod nvme_fabrics 00:25:58.469 rmmod nvme_keyring 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 511252 ']' 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 511252 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 511252 ']' 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 511252 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 511252 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 511252' 00:25:58.729 killing process with pid 511252 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 511252 00:25:58.729 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 511252 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.989 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.897 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:00.897 00:26:00.897 real 0m15.073s 00:26:00.897 user 0m17.964s 00:26:00.897 sys 0m7.155s 00:26:00.897 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:00.897 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.897 ************************************ 00:26:00.897 END TEST nvmf_multicontroller 00:26:00.897 ************************************ 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.158 ************************************ 00:26:01.158 START TEST nvmf_aer 00:26:01.158 ************************************ 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:01.158 * Looking for test storage... 
00:26:01.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:01.158 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:01.159 12:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:09.293 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:09.293 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:09.293 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.293 12:38:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:09.293 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.293 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:09.294 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:09.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:26:09.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:26:09.555 00:26:09.555 --- 10.0.0.2 ping statistics --- 00:26:09.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.555 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:09.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:26:09.555 00:26:09.555 --- 10.0.0.1 ping statistics --- 00:26:09.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.555 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=516391 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 516391 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 516391 ']' 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.555 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:09.555 [2024-07-25 12:38:42.915874] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
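For context on the interface plumbing traced in this aer prologue: the initiator keeps cvl_0_1 on the host with 10.0.0.1, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, and the target is then launched inside that namespace. The sketch below restates those steps as a stand-alone root script. The nvmf_tgt path relative to an SPDK checkout is an assumption; the namespace name, interface names, addresses, iptables rule, and target flags are taken from the trace.

    #!/usr/bin/env bash
    # Sketch of the netns-based test network set up by nvmftestinit in this run.
    set -euo pipefail

    NS=cvl_0_0_ns_spdk      # target-side namespace (name from the trace)
    TGT_IF=cvl_0_0          # NIC handed to the target
    INI_IF=cvl_0_1          # NIC left on the host for the initiator

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # accept NVMe/TCP port 4420 on the initiator-side interface, as in the trace
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # the target runs inside the namespace, as nvmfappstart does in the trace
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &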
00:26:09.555 [2024-07-25 12:38:42.915933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.555 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.815 [2024-07-25 12:38:43.009206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:09.815 [2024-07-25 12:38:43.104030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.815 [2024-07-25 12:38:43.104099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.815 [2024-07-25 12:38:43.104107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.815 [2024-07-25 12:38:43.104113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.815 [2024-07-25 12:38:43.104119] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.815 [2024-07-25 12:38:43.104265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.815 [2024-07-25 12:38:43.104419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.815 [2024-07-25 12:38:43.104596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.815 [2024-07-25 12:38:43.104598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.389 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.389 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:26:10.389 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:10.389 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:10.389 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.650 [2024-07-25 12:38:43.846438] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.650 Malloc0 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.650 12:38:43 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.650 [2024-07-25 12:38:43.916736] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.650 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.650 [ 00:26:10.650 { 00:26:10.650 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:10.650 "subtype": "Discovery", 00:26:10.650 "listen_addresses": [], 00:26:10.650 "allow_any_host": true, 00:26:10.650 "hosts": [] 00:26:10.650 }, 00:26:10.650 { 00:26:10.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.650 "subtype": "NVMe", 00:26:10.650 "listen_addresses": [ 00:26:10.650 { 00:26:10.650 "trtype": "TCP", 00:26:10.650 "adrfam": "IPv4", 00:26:10.650 "traddr": "10.0.0.2", 00:26:10.650 "trsvcid": "4420" 00:26:10.650 } 00:26:10.650 ], 00:26:10.650 "allow_any_host": true, 00:26:10.650 "hosts": [], 00:26:10.650 "serial_number": "SPDK00000000000001", 00:26:10.651 "model_number": "SPDK bdev Controller", 00:26:10.651 "max_namespaces": 2, 00:26:10.651 "min_cntlid": 1, 00:26:10.651 "max_cntlid": 65519, 00:26:10.651 "namespaces": [ 00:26:10.651 { 00:26:10.651 "nsid": 1, 00:26:10.651 "bdev_name": "Malloc0", 00:26:10.651 "name": "Malloc0", 00:26:10.651 "nguid": "53B429412DFE4933B1CCE9106C1FE137", 00:26:10.651 "uuid": "53b42941-2dfe-4933-b1cc-e9106c1fe137" 00:26:10.651 } 00:26:10.651 ] 00:26:10.651 } 00:26:10.651 ] 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=516446 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:10.651 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:10.651 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.651 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:10.651 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:10.651 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:10.651 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.912 Malloc1 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.912 Asynchronous Event Request test 00:26:10.912 Attaching to 10.0.0.2 00:26:10.912 Attached to 10.0.0.2 00:26:10.912 Registering asynchronous event callbacks... 00:26:10.912 Starting namespace attribute notice tests for all controllers... 00:26:10.912 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:10.912 aer_cb - Changed Namespace 00:26:10.912 Cleaning up... 
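The aer_cb output above is the result of the setup traced over the preceding steps: a TCP transport, a malloc-backed subsystem listening on 10.0.0.2:4420, the aer helper waiting for a namespace-change notice, and a second namespace hot-added to trigger it. The rpc_cmd calls go through the harness wrapper; an equivalent standalone sequence, assuming scripts/rpc.py from this repo talks to the same /var/tmp/spdk.sock, would look roughly like:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # target side: TCP transport, subsystem cnode1 backed by Malloc0, listener on 4420
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: the aer helper blocks until it sees a namespace-attribute-changed AEN
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &

    # hot-adding a second namespace is what fires the event logged above
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2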
00:26:10.912 [ 00:26:10.912 { 00:26:10.912 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:10.912 "subtype": "Discovery", 00:26:10.912 "listen_addresses": [], 00:26:10.912 "allow_any_host": true, 00:26:10.912 "hosts": [] 00:26:10.912 }, 00:26:10.912 { 00:26:10.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.912 "subtype": "NVMe", 00:26:10.912 "listen_addresses": [ 00:26:10.912 { 00:26:10.912 "trtype": "TCP", 00:26:10.912 "adrfam": "IPv4", 00:26:10.912 "traddr": "10.0.0.2", 00:26:10.912 "trsvcid": "4420" 00:26:10.912 } 00:26:10.912 ], 00:26:10.912 "allow_any_host": true, 00:26:10.912 "hosts": [], 00:26:10.912 "serial_number": "SPDK00000000000001", 00:26:10.912 "model_number": "SPDK bdev Controller", 00:26:10.912 "max_namespaces": 2, 00:26:10.912 "min_cntlid": 1, 00:26:10.912 "max_cntlid": 65519, 00:26:10.912 "namespaces": [ 00:26:10.912 { 00:26:10.912 "nsid": 1, 00:26:10.912 "bdev_name": "Malloc0", 00:26:10.912 "name": "Malloc0", 00:26:10.912 "nguid": "53B429412DFE4933B1CCE9106C1FE137", 00:26:10.912 "uuid": "53b42941-2dfe-4933-b1cc-e9106c1fe137" 00:26:10.912 }, 00:26:10.912 { 00:26:10.912 "nsid": 2, 00:26:10.912 "bdev_name": "Malloc1", 00:26:10.912 "name": "Malloc1", 00:26:10.912 "nguid": "7391EC1312BD406AA360AD1AE408589D", 00:26:10.912 "uuid": "7391ec13-12bd-406a-a360-ad1ae408589d" 00:26:10.912 } 00:26:10.912 ] 00:26:10.912 } 00:26:10.912 ] 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 516446 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:10.912 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:10.913 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:26:10.913 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:10.913 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:26:10.913 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:10.913 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.913 rmmod 
nvme_tcp 00:26:11.174 rmmod nvme_fabrics 00:26:11.174 rmmod nvme_keyring 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 516391 ']' 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 516391 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 516391 ']' 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 516391 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 516391 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 516391' 00:26:11.174 killing process with pid 516391 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 516391 00:26:11.174 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 516391 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.438 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:13.391 00:26:13.391 real 0m12.328s 00:26:13.391 user 0m8.416s 00:26:13.391 sys 0m6.744s 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.391 ************************************ 00:26:13.391 END TEST nvmf_aer 00:26:13.391 ************************************ 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.391 ************************************ 00:26:13.391 START TEST nvmf_async_init 00:26:13.391 ************************************ 00:26:13.391 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:13.652 * Looking for test storage... 00:26:13.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.652 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.653 
12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=12e47ff468bc410493a22a5247294270 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:26:13.653 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:21.798 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:21.798 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.798 
12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:21.798 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:21.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:21.799 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.799 12:38:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.799 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.799 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.799 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.799 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:21.799 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:26:22.061 00:26:22.061 --- 10.0.0.2 ping statistics --- 00:26:22.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.061 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:22.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:26:22.061 00:26:22.061 --- 10.0.0.1 ping statistics --- 00:26:22.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.061 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=520938 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 520938 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 520938 ']' 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.061 12:38:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.061 [2024-07-25 12:38:55.383200] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
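Condensed from the nvmf_tcp_init trace above: one port of the NIC (cvl_0_0) is moved into a private namespace and addressed as the target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, and the two pings verify the path in both directions:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator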
00:26:22.061 [2024-07-25 12:38:55.383260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.061 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.061 [2024-07-25 12:38:55.477675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.322 [2024-07-25 12:38:55.571205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.322 [2024-07-25 12:38:55.571256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.322 [2024-07-25 12:38:55.571263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.322 [2024-07-25 12:38:55.571270] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.322 [2024-07-25 12:38:55.571276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.322 [2024-07-25 12:38:55.571305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.892 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.892 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:26:22.892 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.892 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:22.892 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.892 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.893 [2024-07-25 12:38:56.269841] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.893 null0 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:22.893 12:38:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 12e47ff468bc410493a22a5247294270 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.893 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.152 [2024-07-25 12:38:56.330229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.152 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.412 nvme0n1 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 [ 00:26:23.413 { 00:26:23.413 "name": "nvme0n1", 00:26:23.413 "aliases": [ 00:26:23.413 "12e47ff4-68bc-4104-93a2-2a5247294270" 00:26:23.413 ], 00:26:23.413 "product_name": "NVMe disk", 00:26:23.413 "block_size": 512, 00:26:23.413 "num_blocks": 2097152, 00:26:23.413 "uuid": "12e47ff4-68bc-4104-93a2-2a5247294270", 00:26:23.413 "assigned_rate_limits": { 00:26:23.413 "rw_ios_per_sec": 0, 00:26:23.413 "rw_mbytes_per_sec": 0, 00:26:23.413 "r_mbytes_per_sec": 0, 00:26:23.413 "w_mbytes_per_sec": 0 00:26:23.413 }, 00:26:23.413 "claimed": false, 00:26:23.413 "zoned": false, 00:26:23.413 "supported_io_types": { 00:26:23.413 "read": true, 00:26:23.413 "write": true, 00:26:23.413 "unmap": false, 00:26:23.413 "flush": true, 00:26:23.413 "reset": true, 00:26:23.413 "nvme_admin": true, 00:26:23.413 "nvme_io": true, 00:26:23.413 "nvme_io_md": false, 00:26:23.413 "write_zeroes": true, 00:26:23.413 "zcopy": false, 00:26:23.413 "get_zone_info": false, 00:26:23.413 "zone_management": false, 00:26:23.413 "zone_append": false, 00:26:23.413 "compare": true, 00:26:23.413 "compare_and_write": true, 00:26:23.413 "abort": true, 00:26:23.413 "seek_hole": false, 00:26:23.413 "seek_data": false, 00:26:23.413 "copy": true, 00:26:23.413 "nvme_iov_md": 
false 00:26:23.413 }, 00:26:23.413 "memory_domains": [ 00:26:23.413 { 00:26:23.413 "dma_device_id": "system", 00:26:23.413 "dma_device_type": 1 00:26:23.413 } 00:26:23.413 ], 00:26:23.413 "driver_specific": { 00:26:23.413 "nvme": [ 00:26:23.413 { 00:26:23.413 "trid": { 00:26:23.413 "trtype": "TCP", 00:26:23.413 "adrfam": "IPv4", 00:26:23.413 "traddr": "10.0.0.2", 00:26:23.413 "trsvcid": "4420", 00:26:23.413 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:23.413 }, 00:26:23.413 "ctrlr_data": { 00:26:23.413 "cntlid": 1, 00:26:23.413 "vendor_id": "0x8086", 00:26:23.413 "model_number": "SPDK bdev Controller", 00:26:23.413 "serial_number": "00000000000000000000", 00:26:23.413 "firmware_revision": "24.09", 00:26:23.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.413 "oacs": { 00:26:23.413 "security": 0, 00:26:23.413 "format": 0, 00:26:23.413 "firmware": 0, 00:26:23.413 "ns_manage": 0 00:26:23.413 }, 00:26:23.413 "multi_ctrlr": true, 00:26:23.413 "ana_reporting": false 00:26:23.413 }, 00:26:23.413 "vs": { 00:26:23.413 "nvme_version": "1.3" 00:26:23.413 }, 00:26:23.413 "ns_data": { 00:26:23.413 "id": 1, 00:26:23.413 "can_share": true 00:26:23.413 } 00:26:23.413 } 00:26:23.413 ], 00:26:23.413 "mp_policy": "active_passive" 00:26:23.413 } 00:26:23.413 } 00:26:23.413 ] 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 [2024-07-25 12:38:56.602658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.413 [2024-07-25 12:38:56.602739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2ca30 (9): Bad file descriptor 00:26:23.413 [2024-07-25 12:38:56.775646] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
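The reset just logged exercises the reconnect path: nvme0 is disconnected, re-attached to the same listener, and the namespace bdev survives while its controller ID changes. A quick standalone check of the same behaviour, assuming scripts/rpc.py points at the same socket (the jq filter is added here only for illustration):

    scripts/rpc.py bdev_nvme_reset_controller nvme0
    # after the reconnect the bdev is intact but reports a new controller ID (1 -> 2 in this run)
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'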
00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 [ 00:26:23.413 { 00:26:23.413 "name": "nvme0n1", 00:26:23.413 "aliases": [ 00:26:23.413 "12e47ff4-68bc-4104-93a2-2a5247294270" 00:26:23.413 ], 00:26:23.413 "product_name": "NVMe disk", 00:26:23.413 "block_size": 512, 00:26:23.413 "num_blocks": 2097152, 00:26:23.413 "uuid": "12e47ff4-68bc-4104-93a2-2a5247294270", 00:26:23.413 "assigned_rate_limits": { 00:26:23.413 "rw_ios_per_sec": 0, 00:26:23.413 "rw_mbytes_per_sec": 0, 00:26:23.413 "r_mbytes_per_sec": 0, 00:26:23.413 "w_mbytes_per_sec": 0 00:26:23.413 }, 00:26:23.413 "claimed": false, 00:26:23.413 "zoned": false, 00:26:23.413 "supported_io_types": { 00:26:23.413 "read": true, 00:26:23.413 "write": true, 00:26:23.413 "unmap": false, 00:26:23.413 "flush": true, 00:26:23.413 "reset": true, 00:26:23.413 "nvme_admin": true, 00:26:23.413 "nvme_io": true, 00:26:23.413 "nvme_io_md": false, 00:26:23.413 "write_zeroes": true, 00:26:23.413 "zcopy": false, 00:26:23.413 "get_zone_info": false, 00:26:23.413 "zone_management": false, 00:26:23.413 "zone_append": false, 00:26:23.413 "compare": true, 00:26:23.413 "compare_and_write": true, 00:26:23.413 "abort": true, 00:26:23.413 "seek_hole": false, 00:26:23.413 "seek_data": false, 00:26:23.413 "copy": true, 00:26:23.413 "nvme_iov_md": false 00:26:23.413 }, 00:26:23.413 "memory_domains": [ 00:26:23.413 { 00:26:23.413 "dma_device_id": "system", 00:26:23.413 "dma_device_type": 1 00:26:23.413 } 00:26:23.413 ], 00:26:23.413 "driver_specific": { 00:26:23.413 "nvme": [ 00:26:23.413 { 00:26:23.413 "trid": { 00:26:23.413 "trtype": "TCP", 00:26:23.413 "adrfam": "IPv4", 00:26:23.413 "traddr": "10.0.0.2", 00:26:23.413 "trsvcid": "4420", 00:26:23.413 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:23.413 }, 00:26:23.413 "ctrlr_data": { 00:26:23.413 "cntlid": 2, 00:26:23.413 "vendor_id": "0x8086", 00:26:23.413 "model_number": "SPDK bdev Controller", 00:26:23.413 "serial_number": "00000000000000000000", 00:26:23.413 "firmware_revision": "24.09", 00:26:23.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.413 "oacs": { 00:26:23.413 "security": 0, 00:26:23.413 "format": 0, 00:26:23.413 "firmware": 0, 00:26:23.413 "ns_manage": 0 00:26:23.413 }, 00:26:23.413 "multi_ctrlr": true, 00:26:23.413 "ana_reporting": false 00:26:23.413 }, 00:26:23.413 "vs": { 00:26:23.413 "nvme_version": "1.3" 00:26:23.413 }, 00:26:23.413 "ns_data": { 00:26:23.413 "id": 1, 00:26:23.413 "can_share": true 00:26:23.413 } 00:26:23.413 } 00:26:23.413 ], 00:26:23.413 "mp_policy": "active_passive" 00:26:23.413 } 00:26:23.413 } 00:26:23.413 ] 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.413 12:38:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Meh4KsXIAm 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:23.413 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Meh4KsXIAm 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.674 [2024-07-25 12:38:56.851395] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:23.674 [2024-07-25 12:38:56.851569] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Meh4KsXIAm 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.674 [2024-07-25 12:38:56.863420] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Meh4KsXIAm 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.674 [2024-07-25 12:38:56.875463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:23.674 [2024-07-25 12:38:56.875510] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:23.674 nvme0n1 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.674 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
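The TLS portion of the test: a PSK in NVMe TLS interchange format is written to a 0600 temp file, the subsystem stops accepting arbitrary hosts, a second listener on port 4421 is created with --secure-channel, the host NQN is admitted with that PSK, and the controller is re-attached against 4421 using the same key (this run also logs the PSK-path options as deprecated for removal in v24.09). Condensed, reusing the temp file name from this run purely as an example:

    key=/tmp/tmp.Meh4KsXIAm
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"

    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"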
00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.675 [ 00:26:23.675 { 00:26:23.675 "name": "nvme0n1", 00:26:23.675 "aliases": [ 00:26:23.675 "12e47ff4-68bc-4104-93a2-2a5247294270" 00:26:23.675 ], 00:26:23.675 "product_name": "NVMe disk", 00:26:23.675 "block_size": 512, 00:26:23.675 "num_blocks": 2097152, 00:26:23.675 "uuid": "12e47ff4-68bc-4104-93a2-2a5247294270", 00:26:23.675 "assigned_rate_limits": { 00:26:23.675 "rw_ios_per_sec": 0, 00:26:23.675 "rw_mbytes_per_sec": 0, 00:26:23.675 "r_mbytes_per_sec": 0, 00:26:23.675 "w_mbytes_per_sec": 0 00:26:23.675 }, 00:26:23.675 "claimed": false, 00:26:23.675 "zoned": false, 00:26:23.675 "supported_io_types": { 00:26:23.675 "read": true, 00:26:23.675 "write": true, 00:26:23.675 "unmap": false, 00:26:23.675 "flush": true, 00:26:23.675 "reset": true, 00:26:23.675 "nvme_admin": true, 00:26:23.675 "nvme_io": true, 00:26:23.675 "nvme_io_md": false, 00:26:23.675 "write_zeroes": true, 00:26:23.675 "zcopy": false, 00:26:23.675 "get_zone_info": false, 00:26:23.675 "zone_management": false, 00:26:23.675 "zone_append": false, 00:26:23.675 "compare": true, 00:26:23.675 "compare_and_write": true, 00:26:23.675 "abort": true, 00:26:23.675 "seek_hole": false, 00:26:23.675 "seek_data": false, 00:26:23.675 "copy": true, 00:26:23.675 "nvme_iov_md": false 00:26:23.675 }, 00:26:23.675 "memory_domains": [ 00:26:23.675 { 00:26:23.675 "dma_device_id": "system", 00:26:23.675 "dma_device_type": 1 00:26:23.675 } 00:26:23.675 ], 00:26:23.675 "driver_specific": { 00:26:23.675 "nvme": [ 00:26:23.675 { 00:26:23.675 "trid": { 00:26:23.675 "trtype": "TCP", 00:26:23.675 "adrfam": "IPv4", 00:26:23.675 "traddr": "10.0.0.2", 00:26:23.675 "trsvcid": "4421", 00:26:23.675 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:23.675 }, 00:26:23.675 "ctrlr_data": { 00:26:23.675 "cntlid": 3, 00:26:23.675 "vendor_id": "0x8086", 00:26:23.675 "model_number": "SPDK bdev Controller", 00:26:23.675 "serial_number": "00000000000000000000", 00:26:23.675 "firmware_revision": "24.09", 00:26:23.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.675 "oacs": { 00:26:23.675 "security": 0, 00:26:23.675 "format": 0, 00:26:23.675 "firmware": 0, 00:26:23.675 "ns_manage": 0 00:26:23.675 }, 00:26:23.675 "multi_ctrlr": true, 00:26:23.675 "ana_reporting": false 00:26:23.675 }, 00:26:23.675 "vs": { 00:26:23.675 "nvme_version": "1.3" 00:26:23.675 }, 00:26:23.675 "ns_data": { 00:26:23.675 "id": 1, 00:26:23.675 "can_share": true 00:26:23.675 } 00:26:23.675 } 00:26:23.675 ], 00:26:23.675 "mp_policy": "active_passive" 00:26:23.675 } 00:26:23.675 } 00:26:23.675 ] 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Meh4KsXIAm 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:26:23.675 12:38:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.675 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.675 rmmod nvme_tcp 00:26:23.675 rmmod nvme_fabrics 00:26:23.675 rmmod nvme_keyring 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 520938 ']' 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 520938 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 520938 ']' 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 520938 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.675 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 520938 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 520938' 00:26:23.936 killing process with pid 520938 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 520938 00:26:23.936 [2024-07-25 12:38:57.121594] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:23.936 [2024-07-25 12:38:57.121633] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 520938 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.936 12:38:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.936 12:38:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.480 12:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:26.481 00:26:26.481 real 0m12.591s 00:26:26.481 user 0m4.430s 00:26:26.481 sys 0m6.694s 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.481 ************************************ 00:26:26.481 END TEST nvmf_async_init 00:26:26.481 ************************************ 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.481 ************************************ 00:26:26.481 START TEST dma 00:26:26.481 ************************************ 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:26.481 * Looking for test storage... 00:26:26.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:26.481 00:26:26.481 real 0m0.138s 00:26:26.481 user 0m0.054s 00:26:26.481 sys 0m0.094s 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:26.481 ************************************ 00:26:26.481 END TEST dma 00:26:26.481 ************************************ 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.481 ************************************ 00:26:26.481 START TEST nvmf_identify 00:26:26.481 ************************************ 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:26.481 * Looking for test storage... 
00:26:26.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.481 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.482 12:38:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:34.615 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.615 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:34.616 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.616 12:39:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:34.616 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:34.616 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.616 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:34.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:26:34.877 00:26:34.877 --- 10.0.0.2 ping statistics --- 00:26:34.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.877 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:26:34.877 00:26:34.877 --- 10.0.0.1 ping statistics --- 00:26:34.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.877 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:34.877 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=525896 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 525896 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 525896 ']' 00:26:35.137 12:39:08 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.137 12:39:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.137 [2024-07-25 12:39:08.368693] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:26:35.137 [2024-07-25 12:39:08.368781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.137 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.137 [2024-07-25 12:39:08.476898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.398 [2024-07-25 12:39:08.571074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.398 [2024-07-25 12:39:08.571133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.398 [2024-07-25 12:39:08.571142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.398 [2024-07-25 12:39:08.571149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.398 [2024-07-25 12:39:08.571155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
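Note: the block above is the nvmftestinit / start_nvmf_tgt phase of the identify test: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the peer port (cvl_0_1) stays in the root namespace as the initiator, and nvmf_tgt is launched inside the namespace. A condensed sketch of those steps, assuming the same cvl_0_* interface names and workspace layout as traced above:
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port in the host firewall
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
Per the app_setup_trace notices above, 'spdk_trace -s nvmf -i 0' can later be used to snapshot the tracepoints enabled by -e 0xFFFF.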
00:26:35.398 [2024-07-25 12:39:08.571284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.398 [2024-07-25 12:39:08.571440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.398 [2024-07-25 12:39:08.571603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.398 [2024-07-25 12:39:08.571654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 [2024-07-25 12:39:09.251230] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 Malloc0 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 [2024-07-25 12:39:09.365169] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.988 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:35.988 [ 00:26:35.989 { 00:26:35.989 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:35.989 "subtype": "Discovery", 00:26:35.989 "listen_addresses": [ 00:26:35.989 { 00:26:35.989 "trtype": "TCP", 00:26:35.989 "adrfam": "IPv4", 00:26:35.989 "traddr": "10.0.0.2", 00:26:35.989 "trsvcid": "4420" 00:26:35.989 } 00:26:35.989 ], 00:26:35.989 "allow_any_host": true, 00:26:35.989 "hosts": [] 00:26:35.989 }, 00:26:35.989 { 00:26:35.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.989 "subtype": "NVMe", 00:26:35.989 "listen_addresses": [ 00:26:35.989 { 00:26:35.989 "trtype": "TCP", 00:26:35.989 "adrfam": "IPv4", 00:26:35.989 "traddr": "10.0.0.2", 00:26:35.989 "trsvcid": "4420" 00:26:35.989 } 00:26:35.989 ], 00:26:35.989 "allow_any_host": true, 00:26:35.989 "hosts": [], 00:26:35.989 "serial_number": "SPDK00000000000001", 00:26:35.989 "model_number": "SPDK bdev Controller", 00:26:35.989 "max_namespaces": 32, 00:26:35.989 "min_cntlid": 1, 00:26:35.989 "max_cntlid": 65519, 00:26:35.989 "namespaces": [ 00:26:35.989 { 00:26:35.989 "nsid": 1, 00:26:35.989 "bdev_name": "Malloc0", 00:26:35.989 "name": "Malloc0", 00:26:35.989 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:35.989 "eui64": "ABCDEF0123456789", 00:26:35.989 "uuid": "c3317fea-93e3-4081-aaca-2d0d4200fe9d" 00:26:35.989 } 00:26:35.989 ] 00:26:35.989 } 00:26:35.989 ] 00:26:35.989 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.989 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:36.253 [2024-07-25 12:39:09.428299] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
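Note: rpc_cmd in these tests forwards its arguments to scripts/rpc.py over the target's RPC socket, so the configuration traced above (TCP transport, malloc bdev, subsystem, namespace, data and discovery listeners) can be reproduced by hand roughly as below. This is a sketch only; it assumes the default /var/tmp/spdk.sock RPC socket and the same workspace paths:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
The *DEBUG* trace that follows is spdk_nvme_identify with -L all logging enabled bringing up the discovery controller: FABRIC CONNECT on the admin queue, VS/CAP/CC property reads, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY CONTROLLER, keep-alive timeout setup, and finally GET LOG PAGE reads of the discovery log.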
00:26:36.253 [2024-07-25 12:39:09.428357] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526073 ] 00:26:36.253 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.253 [2024-07-25 12:39:09.461673] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:36.253 [2024-07-25 12:39:09.461730] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:36.253 [2024-07-25 12:39:09.461735] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:36.253 [2024-07-25 12:39:09.461752] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:36.253 [2024-07-25 12:39:09.461762] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:36.253 [2024-07-25 12:39:09.465591] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:36.253 [2024-07-25 12:39:09.465627] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4f9ec0 0 00:26:36.253 [2024-07-25 12:39:09.473558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:36.253 [2024-07-25 12:39:09.473574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:36.253 [2024-07-25 12:39:09.473581] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:36.253 [2024-07-25 12:39:09.473584] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:36.253 [2024-07-25 12:39:09.473634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.473640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.473645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.253 [2024-07-25 12:39:09.473660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:36.253 [2024-07-25 12:39:09.473678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.253 [2024-07-25 12:39:09.484562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.253 [2024-07-25 12:39:09.484574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.253 [2024-07-25 12:39:09.484578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.484583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.253 [2024-07-25 12:39:09.484597] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:36.253 [2024-07-25 12:39:09.484604] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:36.253 [2024-07-25 12:39:09.484609] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:36.253 [2024-07-25 12:39:09.484626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.484630] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.484638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.253 [2024-07-25 12:39:09.484647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.253 [2024-07-25 12:39:09.484662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.253 [2024-07-25 12:39:09.484896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.253 [2024-07-25 12:39:09.484903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.253 [2024-07-25 12:39:09.484906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.484910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.253 [2024-07-25 12:39:09.484919] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:36.253 [2024-07-25 12:39:09.484926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:36.253 [2024-07-25 12:39:09.484933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.484936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.484939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.253 [2024-07-25 12:39:09.484946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.253 [2024-07-25 12:39:09.484956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.253 [2024-07-25 12:39:09.485151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.253 [2024-07-25 12:39:09.485158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.253 [2024-07-25 12:39:09.485161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.485164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.253 [2024-07-25 12:39:09.485170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:36.253 [2024-07-25 12:39:09.485177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:36.253 [2024-07-25 12:39:09.485183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.485187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.253 [2024-07-25 12:39:09.485190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.485196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.254 [2024-07-25 12:39:09.485205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.254 [2024-07-25 12:39:09.485401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.254 
[2024-07-25 12:39:09.485408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.254 [2024-07-25 12:39:09.485411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.485415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.254 [2024-07-25 12:39:09.485420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:36.254 [2024-07-25 12:39:09.485428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.485432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.485435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.485441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.254 [2024-07-25 12:39:09.485453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.254 [2024-07-25 12:39:09.485644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.254 [2024-07-25 12:39:09.485652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.254 [2024-07-25 12:39:09.485655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.485658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.254 [2024-07-25 12:39:09.485664] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:36.254 [2024-07-25 12:39:09.485669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:36.254 [2024-07-25 12:39:09.485676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:36.254 [2024-07-25 12:39:09.485781] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:36.254 [2024-07-25 12:39:09.485786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:36.254 [2024-07-25 12:39:09.485795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.485799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.485802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.485808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.254 [2024-07-25 12:39:09.485818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.254 [2024-07-25 12:39:09.486008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.254 [2024-07-25 12:39:09.486015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.254 [2024-07-25 12:39:09.486018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:26:36.254 [2024-07-25 12:39:09.486021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.254 [2024-07-25 12:39:09.486026] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:36.254 [2024-07-25 12:39:09.486034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.486047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.254 [2024-07-25 12:39:09.486056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.254 [2024-07-25 12:39:09.486250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.254 [2024-07-25 12:39:09.486256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.254 [2024-07-25 12:39:09.486260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.254 [2024-07-25 12:39:09.486267] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:36.254 [2024-07-25 12:39:09.486272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:36.254 [2024-07-25 12:39:09.486279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:36.254 [2024-07-25 12:39:09.486289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:36.254 [2024-07-25 12:39:09.486299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.486309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.254 [2024-07-25 12:39:09.486318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.254 [2024-07-25 12:39:09.486544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.254 [2024-07-25 12:39:09.486559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.254 [2024-07-25 12:39:09.486563] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486567] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f9ec0): datao=0, datal=4096, cccid=0 00:26:36.254 [2024-07-25 12:39:09.486571] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57ce40) on tqpair(0x4f9ec0): expected_datao=0, payload_size=4096 00:26:36.254 [2024-07-25 12:39:09.486575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:26:36.254 [2024-07-25 12:39:09.486583] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486588] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.254 [2024-07-25 12:39:09.486715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.254 [2024-07-25 12:39:09.486718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.254 [2024-07-25 12:39:09.486730] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:36.254 [2024-07-25 12:39:09.486735] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:36.254 [2024-07-25 12:39:09.486739] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:36.254 [2024-07-25 12:39:09.486745] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:36.254 [2024-07-25 12:39:09.486749] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:36.254 [2024-07-25 12:39:09.486754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:36.254 [2024-07-25 12:39:09.486762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:36.254 [2024-07-25 12:39:09.486772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.486779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.486786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:36.254 [2024-07-25 12:39:09.486796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.254 [2024-07-25 12:39:09.486995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.254 [2024-07-25 12:39:09.487002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.254 [2024-07-25 12:39:09.487005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.254 [2024-07-25 12:39:09.487019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.487032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.254 [2024-07-25 12:39:09.487038] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.487050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.254 [2024-07-25 12:39:09.487055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487062] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.487067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.254 [2024-07-25 12:39:09.487073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.254 [2024-07-25 12:39:09.487080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.254 [2024-07-25 12:39:09.487085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.255 [2024-07-25 12:39:09.487090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:36.255 [2024-07-25 12:39:09.487102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:36.255 [2024-07-25 12:39:09.487109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.487113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f9ec0) 00:26:36.255 [2024-07-25 12:39:09.487120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.255 [2024-07-25 12:39:09.487133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ce40, cid 0, qid 0 00:26:36.255 [2024-07-25 12:39:09.487138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57cfc0, cid 1, qid 0 00:26:36.255 [2024-07-25 12:39:09.487142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d140, cid 2, qid 0 00:26:36.255 [2024-07-25 12:39:09.487147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.255 [2024-07-25 12:39:09.487152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d440, cid 4, qid 0 00:26:36.255 [2024-07-25 12:39:09.487402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.255 [2024-07-25 12:39:09.487409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.255 [2024-07-25 12:39:09.487412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.487415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d440) on tqpair=0x4f9ec0 00:26:36.255 [2024-07-25 12:39:09.487422] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:36.255 [2024-07-25 12:39:09.487427] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:36.255 [2024-07-25 12:39:09.487437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.487440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f9ec0) 00:26:36.255 [2024-07-25 12:39:09.487449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.255 [2024-07-25 12:39:09.487458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d440, cid 4, qid 0 00:26:36.255 [2024-07-25 12:39:09.487675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.255 [2024-07-25 12:39:09.487683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.255 [2024-07-25 12:39:09.487688] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.487691] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f9ec0): datao=0, datal=4096, cccid=4 00:26:36.255 [2024-07-25 12:39:09.487695] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57d440) on tqpair(0x4f9ec0): expected_datao=0, payload_size=4096 00:26:36.255 [2024-07-25 12:39:09.487700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.487707] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.487712] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.255 [2024-07-25 12:39:09.530570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.255 [2024-07-25 12:39:09.530573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d440) on tqpair=0x4f9ec0 00:26:36.255 [2024-07-25 12:39:09.530591] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:36.255 [2024-07-25 12:39:09.530619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f9ec0) 00:26:36.255 [2024-07-25 12:39:09.530630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.255 [2024-07-25 12:39:09.530638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4f9ec0) 00:26:36.255 [2024-07-25 12:39:09.530653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.255 [2024-07-25 12:39:09.530671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x57d440, cid 4, qid 0 00:26:36.255 [2024-07-25 12:39:09.530676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d5c0, cid 5, qid 0 00:26:36.255 [2024-07-25 12:39:09.530940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.255 [2024-07-25 12:39:09.530947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.255 [2024-07-25 12:39:09.530950] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530953] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f9ec0): datao=0, datal=1024, cccid=4 00:26:36.255 [2024-07-25 12:39:09.530957] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57d440) on tqpair(0x4f9ec0): expected_datao=0, payload_size=1024 00:26:36.255 [2024-07-25 12:39:09.530962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530968] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530971] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.255 [2024-07-25 12:39:09.530982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.255 [2024-07-25 12:39:09.530985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.530992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d5c0) on tqpair=0x4f9ec0 00:26:36.255 [2024-07-25 12:39:09.572744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.255 [2024-07-25 12:39:09.572756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.255 [2024-07-25 12:39:09.572759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.572763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d440) on tqpair=0x4f9ec0 00:26:36.255 [2024-07-25 12:39:09.572784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.572788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f9ec0) 00:26:36.255 [2024-07-25 12:39:09.572794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.255 [2024-07-25 12:39:09.572810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d440, cid 4, qid 0 00:26:36.255 [2024-07-25 12:39:09.573003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.255 [2024-07-25 12:39:09.573009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.255 [2024-07-25 12:39:09.573013] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573016] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f9ec0): datao=0, datal=3072, cccid=4 00:26:36.255 [2024-07-25 12:39:09.573020] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57d440) on tqpair(0x4f9ec0): expected_datao=0, payload_size=3072 00:26:36.255 [2024-07-25 12:39:09.573024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573030] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573034] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.255 [2024-07-25 12:39:09.573159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.255 [2024-07-25 12:39:09.573163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d440) on tqpair=0x4f9ec0 00:26:36.255 [2024-07-25 12:39:09.573174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f9ec0) 00:26:36.255 [2024-07-25 12:39:09.573183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.255 [2024-07-25 12:39:09.573196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d440, cid 4, qid 0 00:26:36.255 [2024-07-25 12:39:09.573445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.255 [2024-07-25 12:39:09.573451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.255 [2024-07-25 12:39:09.573454] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573457] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f9ec0): datao=0, datal=8, cccid=4 00:26:36.255 [2024-07-25 12:39:09.573461] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57d440) on tqpair(0x4f9ec0): expected_datao=0, payload_size=8 00:26:36.255 [2024-07-25 12:39:09.573465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573471] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.573474] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.616562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.255 [2024-07-25 12:39:09.616574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.255 [2024-07-25 12:39:09.616578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.255 [2024-07-25 12:39:09.616581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d440) on tqpair=0x4f9ec0 00:26:36.255 ===================================================== 00:26:36.255 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:36.255 ===================================================== 00:26:36.255 Controller Capabilities/Features 00:26:36.255 ================================ 00:26:36.255 Vendor ID: 0000 00:26:36.255 Subsystem Vendor ID: 0000 00:26:36.255 Serial Number: .................... 00:26:36.255 Model Number: ........................................ 
00:26:36.255 Firmware Version: 24.09 00:26:36.255 Recommended Arb Burst: 0 00:26:36.255 IEEE OUI Identifier: 00 00 00 00:26:36.255 Multi-path I/O 00:26:36.255 May have multiple subsystem ports: No 00:26:36.256 May have multiple controllers: No 00:26:36.256 Associated with SR-IOV VF: No 00:26:36.256 Max Data Transfer Size: 131072 00:26:36.256 Max Number of Namespaces: 0 00:26:36.256 Max Number of I/O Queues: 1024 00:26:36.256 NVMe Specification Version (VS): 1.3 00:26:36.256 NVMe Specification Version (Identify): 1.3 00:26:36.256 Maximum Queue Entries: 128 00:26:36.256 Contiguous Queues Required: Yes 00:26:36.256 Arbitration Mechanisms Supported 00:26:36.256 Weighted Round Robin: Not Supported 00:26:36.256 Vendor Specific: Not Supported 00:26:36.256 Reset Timeout: 15000 ms 00:26:36.256 Doorbell Stride: 4 bytes 00:26:36.256 NVM Subsystem Reset: Not Supported 00:26:36.256 Command Sets Supported 00:26:36.256 NVM Command Set: Supported 00:26:36.256 Boot Partition: Not Supported 00:26:36.256 Memory Page Size Minimum: 4096 bytes 00:26:36.256 Memory Page Size Maximum: 4096 bytes 00:26:36.256 Persistent Memory Region: Not Supported 00:26:36.256 Optional Asynchronous Events Supported 00:26:36.256 Namespace Attribute Notices: Not Supported 00:26:36.256 Firmware Activation Notices: Not Supported 00:26:36.256 ANA Change Notices: Not Supported 00:26:36.256 PLE Aggregate Log Change Notices: Not Supported 00:26:36.256 LBA Status Info Alert Notices: Not Supported 00:26:36.256 EGE Aggregate Log Change Notices: Not Supported 00:26:36.256 Normal NVM Subsystem Shutdown event: Not Supported 00:26:36.256 Zone Descriptor Change Notices: Not Supported 00:26:36.256 Discovery Log Change Notices: Supported 00:26:36.256 Controller Attributes 00:26:36.256 128-bit Host Identifier: Not Supported 00:26:36.256 Non-Operational Permissive Mode: Not Supported 00:26:36.256 NVM Sets: Not Supported 00:26:36.256 Read Recovery Levels: Not Supported 00:26:36.256 Endurance Groups: Not Supported 00:26:36.256 Predictable Latency Mode: Not Supported 00:26:36.256 Traffic Based Keep ALive: Not Supported 00:26:36.256 Namespace Granularity: Not Supported 00:26:36.256 SQ Associations: Not Supported 00:26:36.256 UUID List: Not Supported 00:26:36.256 Multi-Domain Subsystem: Not Supported 00:26:36.256 Fixed Capacity Management: Not Supported 00:26:36.256 Variable Capacity Management: Not Supported 00:26:36.256 Delete Endurance Group: Not Supported 00:26:36.256 Delete NVM Set: Not Supported 00:26:36.256 Extended LBA Formats Supported: Not Supported 00:26:36.256 Flexible Data Placement Supported: Not Supported 00:26:36.256 00:26:36.256 Controller Memory Buffer Support 00:26:36.256 ================================ 00:26:36.256 Supported: No 00:26:36.256 00:26:36.256 Persistent Memory Region Support 00:26:36.256 ================================ 00:26:36.256 Supported: No 00:26:36.256 00:26:36.256 Admin Command Set Attributes 00:26:36.256 ============================ 00:26:36.256 Security Send/Receive: Not Supported 00:26:36.256 Format NVM: Not Supported 00:26:36.256 Firmware Activate/Download: Not Supported 00:26:36.256 Namespace Management: Not Supported 00:26:36.256 Device Self-Test: Not Supported 00:26:36.256 Directives: Not Supported 00:26:36.256 NVMe-MI: Not Supported 00:26:36.256 Virtualization Management: Not Supported 00:26:36.256 Doorbell Buffer Config: Not Supported 00:26:36.256 Get LBA Status Capability: Not Supported 00:26:36.256 Command & Feature Lockdown Capability: Not Supported 00:26:36.256 Abort Command Limit: 1 00:26:36.256 Async 
Event Request Limit: 4 00:26:36.256 Number of Firmware Slots: N/A 00:26:36.256 Firmware Slot 1 Read-Only: N/A 00:26:36.256 Firmware Activation Without Reset: N/A 00:26:36.256 Multiple Update Detection Support: N/A 00:26:36.256 Firmware Update Granularity: No Information Provided 00:26:36.256 Per-Namespace SMART Log: No 00:26:36.256 Asymmetric Namespace Access Log Page: Not Supported 00:26:36.256 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:36.256 Command Effects Log Page: Not Supported 00:26:36.256 Get Log Page Extended Data: Supported 00:26:36.256 Telemetry Log Pages: Not Supported 00:26:36.256 Persistent Event Log Pages: Not Supported 00:26:36.256 Supported Log Pages Log Page: May Support 00:26:36.256 Commands Supported & Effects Log Page: Not Supported 00:26:36.256 Feature Identifiers & Effects Log Page:May Support 00:26:36.256 NVMe-MI Commands & Effects Log Page: May Support 00:26:36.256 Data Area 4 for Telemetry Log: Not Supported 00:26:36.256 Error Log Page Entries Supported: 128 00:26:36.256 Keep Alive: Not Supported 00:26:36.256 00:26:36.256 NVM Command Set Attributes 00:26:36.256 ========================== 00:26:36.256 Submission Queue Entry Size 00:26:36.256 Max: 1 00:26:36.256 Min: 1 00:26:36.256 Completion Queue Entry Size 00:26:36.256 Max: 1 00:26:36.256 Min: 1 00:26:36.256 Number of Namespaces: 0 00:26:36.256 Compare Command: Not Supported 00:26:36.256 Write Uncorrectable Command: Not Supported 00:26:36.256 Dataset Management Command: Not Supported 00:26:36.256 Write Zeroes Command: Not Supported 00:26:36.256 Set Features Save Field: Not Supported 00:26:36.256 Reservations: Not Supported 00:26:36.256 Timestamp: Not Supported 00:26:36.256 Copy: Not Supported 00:26:36.256 Volatile Write Cache: Not Present 00:26:36.256 Atomic Write Unit (Normal): 1 00:26:36.256 Atomic Write Unit (PFail): 1 00:26:36.256 Atomic Compare & Write Unit: 1 00:26:36.256 Fused Compare & Write: Supported 00:26:36.256 Scatter-Gather List 00:26:36.256 SGL Command Set: Supported 00:26:36.256 SGL Keyed: Supported 00:26:36.256 SGL Bit Bucket Descriptor: Not Supported 00:26:36.256 SGL Metadata Pointer: Not Supported 00:26:36.256 Oversized SGL: Not Supported 00:26:36.256 SGL Metadata Address: Not Supported 00:26:36.256 SGL Offset: Supported 00:26:36.256 Transport SGL Data Block: Not Supported 00:26:36.256 Replay Protected Memory Block: Not Supported 00:26:36.256 00:26:36.256 Firmware Slot Information 00:26:36.256 ========================= 00:26:36.256 Active slot: 0 00:26:36.256 00:26:36.256 00:26:36.256 Error Log 00:26:36.256 ========= 00:26:36.256 00:26:36.256 Active Namespaces 00:26:36.256 ================= 00:26:36.256 Discovery Log Page 00:26:36.256 ================== 00:26:36.256 Generation Counter: 2 00:26:36.256 Number of Records: 2 00:26:36.256 Record Format: 0 00:26:36.256 00:26:36.256 Discovery Log Entry 0 00:26:36.256 ---------------------- 00:26:36.256 Transport Type: 3 (TCP) 00:26:36.256 Address Family: 1 (IPv4) 00:26:36.256 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:36.256 Entry Flags: 00:26:36.256 Duplicate Returned Information: 1 00:26:36.256 Explicit Persistent Connection Support for Discovery: 1 00:26:36.256 Transport Requirements: 00:26:36.256 Secure Channel: Not Required 00:26:36.256 Port ID: 0 (0x0000) 00:26:36.256 Controller ID: 65535 (0xffff) 00:26:36.256 Admin Max SQ Size: 128 00:26:36.256 Transport Service Identifier: 4420 00:26:36.256 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:36.256 Transport Address: 10.0.0.2 00:26:36.256 
Discovery Log Entry 1 00:26:36.256 ---------------------- 00:26:36.256 Transport Type: 3 (TCP) 00:26:36.256 Address Family: 1 (IPv4) 00:26:36.256 Subsystem Type: 2 (NVM Subsystem) 00:26:36.256 Entry Flags: 00:26:36.256 Duplicate Returned Information: 0 00:26:36.256 Explicit Persistent Connection Support for Discovery: 0 00:26:36.256 Transport Requirements: 00:26:36.256 Secure Channel: Not Required 00:26:36.256 Port ID: 0 (0x0000) 00:26:36.256 Controller ID: 65535 (0xffff) 00:26:36.256 Admin Max SQ Size: 128 00:26:36.256 Transport Service Identifier: 4420 00:26:36.256 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:36.256 Transport Address: 10.0.0.2 [2024-07-25 12:39:09.616673] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:36.256 [2024-07-25 12:39:09.616684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57ce40) on tqpair=0x4f9ec0 00:26:36.256 [2024-07-25 12:39:09.616692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.256 [2024-07-25 12:39:09.616697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57cfc0) on tqpair=0x4f9ec0 00:26:36.256 [2024-07-25 12:39:09.616701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.257 [2024-07-25 12:39:09.616705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d140) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.616710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.257 [2024-07-25 12:39:09.616714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.616718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.257 [2024-07-25 12:39:09.616730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.616734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.616737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.616744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.616759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.616982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.616990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.616994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.616998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.617005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.617018] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.617033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.617245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.617254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.617257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.617267] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:36.257 [2024-07-25 12:39:09.617271] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:36.257 [2024-07-25 12:39:09.617280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.617293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.617302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.617492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.617498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.617501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.617515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.617533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.617542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.617742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.617750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.617753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.617765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617772] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.617778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.617787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.617967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.617973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.617976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.617988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.617995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.618001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.618010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.618187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.618193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.618196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.618208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.618221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.618231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.618414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.618422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.618426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.618438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.618451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.618460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.618675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.618682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.618685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.618697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.618710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.618719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.618905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.618911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.618915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.618926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.618933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.257 [2024-07-25 12:39:09.618939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.257 [2024-07-25 12:39:09.618948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.257 [2024-07-25 12:39:09.619141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.257 [2024-07-25 12:39:09.619147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.257 [2024-07-25 12:39:09.619150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.619153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.257 [2024-07-25 12:39:09.619162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.619165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.257 [2024-07-25 12:39:09.619168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.258 [2024-07-25 12:39:09.619175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.258 [2024-07-25 12:39:09.619184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.258 [2024-07-25 12:39:09.619376] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.258 [2024-07-25 12:39:09.619382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.258 [2024-07-25 12:39:09.619387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.258 [2024-07-25 12:39:09.619400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.258 [2024-07-25 12:39:09.619413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.258 [2024-07-25 12:39:09.619422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.258 [2024-07-25 12:39:09.619633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.258 [2024-07-25 12:39:09.619639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.258 [2024-07-25 12:39:09.619642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.258 [2024-07-25 12:39:09.619655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.258 [2024-07-25 12:39:09.619667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.258 [2024-07-25 12:39:09.619677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.258 [2024-07-25 12:39:09.619892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.258 [2024-07-25 12:39:09.619900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.258 [2024-07-25 12:39:09.619903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.258 [2024-07-25 12:39:09.619916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.619923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.258 [2024-07-25 12:39:09.619929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.258 [2024-07-25 12:39:09.619938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.258 [2024-07-25 12:39:09.620116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.258 [2024-07-25 12:39:09.620122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.258 [2024-07-25 12:39:09.620125] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.620129] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.258 [2024-07-25 12:39:09.620138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.620141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.620144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.258 [2024-07-25 12:39:09.620150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.258 [2024-07-25 12:39:09.620159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.258 [2024-07-25 12:39:09.620344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.258 [2024-07-25 12:39:09.620350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.258 [2024-07-25 12:39:09.620353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.620357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.258 [2024-07-25 12:39:09.620368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.620372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.620375] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.258 [2024-07-25 12:39:09.620381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.258 [2024-07-25 12:39:09.620390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.258 [2024-07-25 12:39:09.624593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.258 [2024-07-25 12:39:09.624604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.258 [2024-07-25 12:39:09.624608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.624611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.258 [2024-07-25 12:39:09.624621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.624625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.624628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f9ec0) 00:26:36.258 [2024-07-25 12:39:09.624634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.258 [2024-07-25 12:39:09.624646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57d2c0, cid 3, qid 0 00:26:36.258 [2024-07-25 12:39:09.624828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.258 [2024-07-25 12:39:09.624834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.258 [2024-07-25 12:39:09.624837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.258 [2024-07-25 12:39:09.624841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x57d2c0) on tqpair=0x4f9ec0 00:26:36.258 
[2024-07-25 12:39:09.624848] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:36.258 00:26:36.258 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:36.258 [2024-07-25 12:39:09.669005] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:26:36.258 [2024-07-25 12:39:09.669068] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526268 ] 00:26:36.520 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.520 [2024-07-25 12:39:09.706566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:36.520 [2024-07-25 12:39:09.706613] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:36.520 [2024-07-25 12:39:09.706618] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:36.520 [2024-07-25 12:39:09.706631] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:36.520 [2024-07-25 12:39:09.706640] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:36.520 [2024-07-25 12:39:09.706984] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:36.520 [2024-07-25 12:39:09.707012] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x124dec0 0 00:26:36.520 [2024-07-25 12:39:09.717562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:36.520 [2024-07-25 12:39:09.717577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:36.520 [2024-07-25 12:39:09.717581] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:36.520 [2024-07-25 12:39:09.717584] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:36.520 [2024-07-25 12:39:09.717623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.520 [2024-07-25 12:39:09.717629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.520 [2024-07-25 12:39:09.717633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.520 [2024-07-25 12:39:09.717645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:36.520 [2024-07-25 12:39:09.717662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.520 [2024-07-25 12:39:09.724560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.520 [2024-07-25 12:39:09.724571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.520 [2024-07-25 12:39:09.724575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.520 [2024-07-25 12:39:09.724579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.520 [2024-07-25 12:39:09.724592] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:26:36.520 [2024-07-25 12:39:09.724598] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:36.520 [2024-07-25 12:39:09.724603] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:36.520 [2024-07-25 12:39:09.724617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.520 [2024-07-25 12:39:09.724621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.520 [2024-07-25 12:39:09.724624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.520 [2024-07-25 12:39:09.724632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.521 [2024-07-25 12:39:09.724647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.724934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.724941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.521 [2024-07-25 12:39:09.724944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.724947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.521 [2024-07-25 12:39:09.724955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:36.521 [2024-07-25 12:39:09.724962] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:36.521 [2024-07-25 12:39:09.724968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.724971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.724974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.521 [2024-07-25 12:39:09.724981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.521 [2024-07-25 12:39:09.724991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.725289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.725297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.521 [2024-07-25 12:39:09.725300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.725304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.521 [2024-07-25 12:39:09.725312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:36.521 [2024-07-25 12:39:09.725321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:36.521 [2024-07-25 12:39:09.725327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.725330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.725333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.521 [2024-07-25 12:39:09.725339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.521 [2024-07-25 12:39:09.725349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.725627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.725635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.521 [2024-07-25 12:39:09.725638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.725642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.521 [2024-07-25 12:39:09.725646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:36.521 [2024-07-25 12:39:09.725655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.725659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.725662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.521 [2024-07-25 12:39:09.725668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.521 [2024-07-25 12:39:09.725678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.725930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.725936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.521 [2024-07-25 12:39:09.725940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.725943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.521 [2024-07-25 12:39:09.725947] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:36.521 [2024-07-25 12:39:09.725951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:36.521 [2024-07-25 12:39:09.725958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:36.521 [2024-07-25 12:39:09.726063] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:36.521 [2024-07-25 12:39:09.726068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:36.521 [2024-07-25 12:39:09.726076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.726081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.726087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.521 [2024-07-25 12:39:09.726095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.521 [2024-07-25 
12:39:09.726105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.726385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.726392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.521 [2024-07-25 12:39:09.726395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.726401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.521 [2024-07-25 12:39:09.726405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:36.521 [2024-07-25 12:39:09.726416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.726420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.726423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.521 [2024-07-25 12:39:09.726429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.521 [2024-07-25 12:39:09.726439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.726718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.726725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.521 [2024-07-25 12:39:09.726728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.726731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.521 [2024-07-25 12:39:09.726735] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:36.521 [2024-07-25 12:39:09.726740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:36.521 [2024-07-25 12:39:09.726749] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:36.521 [2024-07-25 12:39:09.726756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:36.521 [2024-07-25 12:39:09.726765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.726768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.521 [2024-07-25 12:39:09.726775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.521 [2024-07-25 12:39:09.726784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.727158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.521 [2024-07-25 12:39:09.727166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.521 [2024-07-25 12:39:09.727169] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.727172] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=4096, cccid=0 00:26:36.521 [2024-07-25 12:39:09.727176] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d0e40) on tqpair(0x124dec0): expected_datao=0, payload_size=4096 00:26:36.521 [2024-07-25 12:39:09.727181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.727193] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.727196] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.768749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.768762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.521 [2024-07-25 12:39:09.768765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.768769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.521 [2024-07-25 12:39:09.768777] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:36.521 [2024-07-25 12:39:09.768781] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:36.521 [2024-07-25 12:39:09.768789] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:36.521 [2024-07-25 12:39:09.768793] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:36.521 [2024-07-25 12:39:09.768798] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:36.521 [2024-07-25 12:39:09.768802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:36.521 [2024-07-25 12:39:09.768811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:36.521 [2024-07-25 12:39:09.768822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.768826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.521 [2024-07-25 12:39:09.768830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.521 [2024-07-25 12:39:09.768837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:36.521 [2024-07-25 12:39:09.768850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.521 [2024-07-25 12:39:09.769068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.521 [2024-07-25 12:39:09.769074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.522 [2024-07-25 12:39:09.769077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.522 [2024-07-25 12:39:09.769087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769093] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.769099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.522 [2024-07-25 12:39:09.769105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.769117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.522 [2024-07-25 12:39:09.769122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.769134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.522 [2024-07-25 12:39:09.769140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.769152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.522 [2024-07-25 12:39:09.769156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.769166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.769172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.769184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.522 [2024-07-25 12:39:09.769196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0e40, cid 0, qid 0 00:26:36.522 [2024-07-25 12:39:09.769201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0fc0, cid 1, qid 0 00:26:36.522 [2024-07-25 12:39:09.769205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1140, cid 2, qid 0 00:26:36.522 [2024-07-25 12:39:09.769209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d12c0, cid 3, qid 0 00:26:36.522 [2024-07-25 12:39:09.769213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1440, cid 4, qid 0 00:26:36.522 [2024-07-25 12:39:09.769663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.522 [2024-07-25 12:39:09.769670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.522 
[2024-07-25 12:39:09.769673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1440) on tqpair=0x124dec0 00:26:36.522 [2024-07-25 12:39:09.769681] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:36.522 [2024-07-25 12:39:09.769686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.769696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.769703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.769709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.769715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.769721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:36.522 [2024-07-25 12:39:09.769731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1440, cid 4, qid 0 00:26:36.522 [2024-07-25 12:39:09.773559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.522 [2024-07-25 12:39:09.773569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.522 [2024-07-25 12:39:09.773572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.773575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1440) on tqpair=0x124dec0 00:26:36.522 [2024-07-25 12:39:09.773639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.773650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.773657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.773661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.773667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.522 [2024-07-25 12:39:09.773678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1440, cid 4, qid 0 00:26:36.522 [2024-07-25 12:39:09.773986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.522 [2024-07-25 12:39:09.773992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.522 [2024-07-25 12:39:09.773995] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774001] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=4096, cccid=4 00:26:36.522 [2024-07-25 
12:39:09.774006] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d1440) on tqpair(0x124dec0): expected_datao=0, payload_size=4096 00:26:36.522 [2024-07-25 12:39:09.774010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774016] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774020] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.522 [2024-07-25 12:39:09.774167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.522 [2024-07-25 12:39:09.774171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1440) on tqpair=0x124dec0 00:26:36.522 [2024-07-25 12:39:09.774184] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:36.522 [2024-07-25 12:39:09.774203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.774214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.774220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.774229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.522 [2024-07-25 12:39:09.774240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1440, cid 4, qid 0 00:26:36.522 [2024-07-25 12:39:09.774537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.522 [2024-07-25 12:39:09.774543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.522 [2024-07-25 12:39:09.774553] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774557] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=4096, cccid=4 00:26:36.522 [2024-07-25 12:39:09.774561] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d1440) on tqpair(0x124dec0): expected_datao=0, payload_size=4096 00:26:36.522 [2024-07-25 12:39:09.774564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774570] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.774574] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.815764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.522 [2024-07-25 12:39:09.815778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.522 [2024-07-25 12:39:09.815781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.815785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1440) on tqpair=0x124dec0 00:26:36.522 [2024-07-25 12:39:09.815803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to identify namespace id descriptors (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.815813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:36.522 [2024-07-25 12:39:09.815821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.815825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124dec0) 00:26:36.522 [2024-07-25 12:39:09.815832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.522 [2024-07-25 12:39:09.815845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1440, cid 4, qid 0 00:26:36.522 [2024-07-25 12:39:09.816089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.522 [2024-07-25 12:39:09.816097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.522 [2024-07-25 12:39:09.816101] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.522 [2024-07-25 12:39:09.816105] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=4096, cccid=4 00:26:36.522 [2024-07-25 12:39:09.816109] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d1440) on tqpair(0x124dec0): expected_datao=0, payload_size=4096 00:26:36.522 [2024-07-25 12:39:09.816113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.816119] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.816123] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.861561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.523 [2024-07-25 12:39:09.861571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.523 [2024-07-25 12:39:09.861575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.861579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1440) on tqpair=0x124dec0 00:26:36.523 [2024-07-25 12:39:09.861589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:36.523 [2024-07-25 12:39:09.861599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:36.523 [2024-07-25 12:39:09.861650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:36.523 [2024-07-25 12:39:09.861659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:36.523 [2024-07-25 12:39:09.861664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:36.523 [2024-07-25 12:39:09.861669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:36.523 [2024-07-25 12:39:09.861674] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:36.523 
[2024-07-25 12:39:09.861678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:36.523 [2024-07-25 12:39:09.861683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:36.523 [2024-07-25 12:39:09.861699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.861703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.861710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.861717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.861721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.861724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.861730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.523 [2024-07-25 12:39:09.861746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1440, cid 4, qid 0 00:26:36.523 [2024-07-25 12:39:09.861751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d15c0, cid 5, qid 0 00:26:36.523 [2024-07-25 12:39:09.862126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.523 [2024-07-25 12:39:09.862132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.523 [2024-07-25 12:39:09.862138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.862142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1440) on tqpair=0x124dec0 00:26:36.523 [2024-07-25 12:39:09.862148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.523 [2024-07-25 12:39:09.862153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.523 [2024-07-25 12:39:09.862156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.862162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d15c0) on tqpair=0x124dec0 00:26:36.523 [2024-07-25 12:39:09.862171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.862175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.862181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.862191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d15c0, cid 5, qid 0 00:26:36.523 [2024-07-25 12:39:09.862494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.523 [2024-07-25 12:39:09.862501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.523 [2024-07-25 12:39:09.862505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.862509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d15c0) on tqpair=0x124dec0 00:26:36.523 [2024-07-25 12:39:09.862518] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.862522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.862527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.862537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d15c0, cid 5, qid 0 00:26:36.523 [2024-07-25 12:39:09.862814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.523 [2024-07-25 12:39:09.862821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.523 [2024-07-25 12:39:09.862824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.862827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d15c0) on tqpair=0x124dec0 00:26:36.523 [2024-07-25 12:39:09.862836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.862840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.862846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.862855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d15c0, cid 5, qid 0 00:26:36.523 [2024-07-25 12:39:09.863137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.523 [2024-07-25 12:39:09.863142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.523 [2024-07-25 12:39:09.863145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d15c0) on tqpair=0x124dec0 00:26:36.523 [2024-07-25 12:39:09.863163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.863173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.863180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.863191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.863198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.863207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.863214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863217] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x124dec0) 00:26:36.523 [2024-07-25 12:39:09.863222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.523 [2024-07-25 12:39:09.863233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d15c0, cid 5, qid 0 00:26:36.523 [2024-07-25 12:39:09.863237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1440, cid 4, qid 0 00:26:36.523 [2024-07-25 12:39:09.863241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d1740, cid 6, qid 0 00:26:36.523 [2024-07-25 12:39:09.863246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d18c0, cid 7, qid 0 00:26:36.523 [2024-07-25 12:39:09.863814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.523 [2024-07-25 12:39:09.863820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.523 [2024-07-25 12:39:09.863823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863827] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=8192, cccid=5 00:26:36.523 [2024-07-25 12:39:09.863831] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d15c0) on tqpair(0x124dec0): expected_datao=0, payload_size=8192 00:26:36.523 [2024-07-25 12:39:09.863835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863908] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863912] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.523 [2024-07-25 12:39:09.863922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.523 [2024-07-25 12:39:09.863925] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863929] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=512, cccid=4 00:26:36.523 [2024-07-25 12:39:09.863933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d1440) on tqpair(0x124dec0): expected_datao=0, payload_size=512 00:26:36.523 [2024-07-25 12:39:09.863936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863942] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863945] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.523 [2024-07-25 12:39:09.863956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.523 [2024-07-25 12:39:09.863959] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863962] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=512, cccid=6 00:26:36.523 [2024-07-25 12:39:09.863966] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d1740) on tqpair(0x124dec0): expected_datao=0, payload_size=512 00:26:36.523 [2024-07-25 12:39:09.863970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.523 [2024-07-25 
12:39:09.863975] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.523 [2024-07-25 12:39:09.863978] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.863986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:36.524 [2024-07-25 12:39:09.863991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:36.524 [2024-07-25 12:39:09.863994] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.863997] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124dec0): datao=0, datal=4096, cccid=7 00:26:36.524 [2024-07-25 12:39:09.864001] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d18c0) on tqpair(0x124dec0): expected_datao=0, payload_size=4096 00:26:36.524 [2024-07-25 12:39:09.864005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.864011] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.864014] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.864058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.524 [2024-07-25 12:39:09.864063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.524 [2024-07-25 12:39:09.864066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.864070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d15c0) on tqpair=0x124dec0 00:26:36.524 [2024-07-25 12:39:09.864083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.524 [2024-07-25 12:39:09.864088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.524 [2024-07-25 12:39:09.864091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.864095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1440) on tqpair=0x124dec0 00:26:36.524 [2024-07-25 12:39:09.864105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.524 [2024-07-25 12:39:09.864111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.524 [2024-07-25 12:39:09.864114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.864117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1740) on tqpair=0x124dec0 00:26:36.524 [2024-07-25 12:39:09.864123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.524 [2024-07-25 12:39:09.864129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.524 [2024-07-25 12:39:09.864132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.524 [2024-07-25 12:39:09.864135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d18c0) on tqpair=0x124dec0 00:26:36.524 ===================================================== 00:26:36.524 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.524 ===================================================== 00:26:36.524 Controller Capabilities/Features 00:26:36.524 ================================ 00:26:36.524 Vendor ID: 8086 00:26:36.524 Subsystem Vendor ID: 8086 00:26:36.524 Serial Number: SPDK00000000000001 00:26:36.524 Model Number: SPDK bdev Controller 00:26:36.524 Firmware Version: 24.09 00:26:36.524 Recommended Arb Burst: 6 00:26:36.524 
IEEE OUI Identifier: e4 d2 5c 00:26:36.524 Multi-path I/O 00:26:36.524 May have multiple subsystem ports: Yes 00:26:36.524 May have multiple controllers: Yes 00:26:36.524 Associated with SR-IOV VF: No 00:26:36.524 Max Data Transfer Size: 131072 00:26:36.524 Max Number of Namespaces: 32 00:26:36.524 Max Number of I/O Queues: 127 00:26:36.524 NVMe Specification Version (VS): 1.3 00:26:36.524 NVMe Specification Version (Identify): 1.3 00:26:36.524 Maximum Queue Entries: 128 00:26:36.524 Contiguous Queues Required: Yes 00:26:36.524 Arbitration Mechanisms Supported 00:26:36.524 Weighted Round Robin: Not Supported 00:26:36.524 Vendor Specific: Not Supported 00:26:36.524 Reset Timeout: 15000 ms 00:26:36.524 Doorbell Stride: 4 bytes 00:26:36.524 NVM Subsystem Reset: Not Supported 00:26:36.524 Command Sets Supported 00:26:36.524 NVM Command Set: Supported 00:26:36.524 Boot Partition: Not Supported 00:26:36.524 Memory Page Size Minimum: 4096 bytes 00:26:36.524 Memory Page Size Maximum: 4096 bytes 00:26:36.524 Persistent Memory Region: Not Supported 00:26:36.524 Optional Asynchronous Events Supported 00:26:36.524 Namespace Attribute Notices: Supported 00:26:36.524 Firmware Activation Notices: Not Supported 00:26:36.524 ANA Change Notices: Not Supported 00:26:36.524 PLE Aggregate Log Change Notices: Not Supported 00:26:36.524 LBA Status Info Alert Notices: Not Supported 00:26:36.524 EGE Aggregate Log Change Notices: Not Supported 00:26:36.524 Normal NVM Subsystem Shutdown event: Not Supported 00:26:36.524 Zone Descriptor Change Notices: Not Supported 00:26:36.524 Discovery Log Change Notices: Not Supported 00:26:36.524 Controller Attributes 00:26:36.524 128-bit Host Identifier: Supported 00:26:36.524 Non-Operational Permissive Mode: Not Supported 00:26:36.524 NVM Sets: Not Supported 00:26:36.524 Read Recovery Levels: Not Supported 00:26:36.524 Endurance Groups: Not Supported 00:26:36.524 Predictable Latency Mode: Not Supported 00:26:36.524 Traffic Based Keep ALive: Not Supported 00:26:36.524 Namespace Granularity: Not Supported 00:26:36.524 SQ Associations: Not Supported 00:26:36.524 UUID List: Not Supported 00:26:36.524 Multi-Domain Subsystem: Not Supported 00:26:36.524 Fixed Capacity Management: Not Supported 00:26:36.524 Variable Capacity Management: Not Supported 00:26:36.524 Delete Endurance Group: Not Supported 00:26:36.524 Delete NVM Set: Not Supported 00:26:36.524 Extended LBA Formats Supported: Not Supported 00:26:36.524 Flexible Data Placement Supported: Not Supported 00:26:36.524 00:26:36.524 Controller Memory Buffer Support 00:26:36.524 ================================ 00:26:36.524 Supported: No 00:26:36.524 00:26:36.524 Persistent Memory Region Support 00:26:36.524 ================================ 00:26:36.524 Supported: No 00:26:36.524 00:26:36.524 Admin Command Set Attributes 00:26:36.524 ============================ 00:26:36.524 Security Send/Receive: Not Supported 00:26:36.524 Format NVM: Not Supported 00:26:36.524 Firmware Activate/Download: Not Supported 00:26:36.524 Namespace Management: Not Supported 00:26:36.524 Device Self-Test: Not Supported 00:26:36.524 Directives: Not Supported 00:26:36.524 NVMe-MI: Not Supported 00:26:36.524 Virtualization Management: Not Supported 00:26:36.524 Doorbell Buffer Config: Not Supported 00:26:36.524 Get LBA Status Capability: Not Supported 00:26:36.524 Command & Feature Lockdown Capability: Not Supported 00:26:36.524 Abort Command Limit: 4 00:26:36.524 Async Event Request Limit: 4 00:26:36.524 Number of Firmware Slots: N/A 00:26:36.524 Firmware 
Slot 1 Read-Only: N/A 00:26:36.524 Firmware Activation Without Reset: N/A 00:26:36.524 Multiple Update Detection Support: N/A 00:26:36.524 Firmware Update Granularity: No Information Provided 00:26:36.524 Per-Namespace SMART Log: No 00:26:36.524 Asymmetric Namespace Access Log Page: Not Supported 00:26:36.524 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:36.524 Command Effects Log Page: Supported 00:26:36.524 Get Log Page Extended Data: Supported 00:26:36.524 Telemetry Log Pages: Not Supported 00:26:36.524 Persistent Event Log Pages: Not Supported 00:26:36.524 Supported Log Pages Log Page: May Support 00:26:36.524 Commands Supported & Effects Log Page: Not Supported 00:26:36.524 Feature Identifiers & Effects Log Page:May Support 00:26:36.524 NVMe-MI Commands & Effects Log Page: May Support 00:26:36.524 Data Area 4 for Telemetry Log: Not Supported 00:26:36.524 Error Log Page Entries Supported: 128 00:26:36.524 Keep Alive: Supported 00:26:36.524 Keep Alive Granularity: 10000 ms 00:26:36.524 00:26:36.524 NVM Command Set Attributes 00:26:36.524 ========================== 00:26:36.524 Submission Queue Entry Size 00:26:36.524 Max: 64 00:26:36.524 Min: 64 00:26:36.524 Completion Queue Entry Size 00:26:36.524 Max: 16 00:26:36.524 Min: 16 00:26:36.524 Number of Namespaces: 32 00:26:36.524 Compare Command: Supported 00:26:36.524 Write Uncorrectable Command: Not Supported 00:26:36.524 Dataset Management Command: Supported 00:26:36.524 Write Zeroes Command: Supported 00:26:36.524 Set Features Save Field: Not Supported 00:26:36.524 Reservations: Supported 00:26:36.524 Timestamp: Not Supported 00:26:36.524 Copy: Supported 00:26:36.524 Volatile Write Cache: Present 00:26:36.524 Atomic Write Unit (Normal): 1 00:26:36.524 Atomic Write Unit (PFail): 1 00:26:36.524 Atomic Compare & Write Unit: 1 00:26:36.524 Fused Compare & Write: Supported 00:26:36.524 Scatter-Gather List 00:26:36.524 SGL Command Set: Supported 00:26:36.524 SGL Keyed: Supported 00:26:36.524 SGL Bit Bucket Descriptor: Not Supported 00:26:36.524 SGL Metadata Pointer: Not Supported 00:26:36.524 Oversized SGL: Not Supported 00:26:36.524 SGL Metadata Address: Not Supported 00:26:36.524 SGL Offset: Supported 00:26:36.524 Transport SGL Data Block: Not Supported 00:26:36.524 Replay Protected Memory Block: Not Supported 00:26:36.524 00:26:36.524 Firmware Slot Information 00:26:36.524 ========================= 00:26:36.524 Active slot: 1 00:26:36.524 Slot 1 Firmware Revision: 24.09 00:26:36.524 00:26:36.524 00:26:36.525 Commands Supported and Effects 00:26:36.525 ============================== 00:26:36.525 Admin Commands 00:26:36.525 -------------- 00:26:36.525 Get Log Page (02h): Supported 00:26:36.525 Identify (06h): Supported 00:26:36.525 Abort (08h): Supported 00:26:36.525 Set Features (09h): Supported 00:26:36.525 Get Features (0Ah): Supported 00:26:36.525 Asynchronous Event Request (0Ch): Supported 00:26:36.525 Keep Alive (18h): Supported 00:26:36.525 I/O Commands 00:26:36.525 ------------ 00:26:36.525 Flush (00h): Supported LBA-Change 00:26:36.525 Write (01h): Supported LBA-Change 00:26:36.525 Read (02h): Supported 00:26:36.525 Compare (05h): Supported 00:26:36.525 Write Zeroes (08h): Supported LBA-Change 00:26:36.525 Dataset Management (09h): Supported LBA-Change 00:26:36.525 Copy (19h): Supported LBA-Change 00:26:36.525 00:26:36.525 Error Log 00:26:36.525 ========= 00:26:36.525 00:26:36.525 Arbitration 00:26:36.525 =========== 00:26:36.525 Arbitration Burst: 1 00:26:36.525 00:26:36.525 Power Management 00:26:36.525 ================ 
00:26:36.525 Number of Power States: 1 00:26:36.525 Current Power State: Power State #0 00:26:36.525 Power State #0: 00:26:36.525 Max Power: 0.00 W 00:26:36.525 Non-Operational State: Operational 00:26:36.525 Entry Latency: Not Reported 00:26:36.525 Exit Latency: Not Reported 00:26:36.525 Relative Read Throughput: 0 00:26:36.525 Relative Read Latency: 0 00:26:36.525 Relative Write Throughput: 0 00:26:36.525 Relative Write Latency: 0 00:26:36.525 Idle Power: Not Reported 00:26:36.525 Active Power: Not Reported 00:26:36.525 Non-Operational Permissive Mode: Not Supported 00:26:36.525 00:26:36.525 Health Information 00:26:36.525 ================== 00:26:36.525 Critical Warnings: 00:26:36.525 Available Spare Space: OK 00:26:36.525 Temperature: OK 00:26:36.525 Device Reliability: OK 00:26:36.525 Read Only: No 00:26:36.525 Volatile Memory Backup: OK 00:26:36.525 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:36.525 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:36.525 Available Spare: 0% 00:26:36.525 Available Spare Threshold: 0% 00:26:36.525 Life Percentage Used:[2024-07-25 12:39:09.864236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.864241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x124dec0) 00:26:36.525 [2024-07-25 12:39:09.864247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.525 [2024-07-25 12:39:09.864257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d18c0, cid 7, qid 0 00:26:36.525 [2024-07-25 12:39:09.864540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.525 [2024-07-25 12:39:09.864554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.525 [2024-07-25 12:39:09.864558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.864561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d18c0) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.864595] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:36.525 [2024-07-25 12:39:09.864605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0e40) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.864610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.525 [2024-07-25 12:39:09.864615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d0fc0) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.864619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.525 [2024-07-25 12:39:09.864626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d1140) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.864630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.525 [2024-07-25 12:39:09.864634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d12c0) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.864638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.525 [2024-07-25 12:39:09.864646] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.864650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.864653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124dec0) 00:26:36.525 [2024-07-25 12:39:09.864659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.525 [2024-07-25 12:39:09.864670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d12c0, cid 3, qid 0 00:26:36.525 [2024-07-25 12:39:09.864910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.525 [2024-07-25 12:39:09.864917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.525 [2024-07-25 12:39:09.864920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.864925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d12c0) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.864932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.864937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.864940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124dec0) 00:26:36.525 [2024-07-25 12:39:09.864946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.525 [2024-07-25 12:39:09.864959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d12c0, cid 3, qid 0 00:26:36.525 [2024-07-25 12:39:09.865283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.525 [2024-07-25 12:39:09.865290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.525 [2024-07-25 12:39:09.865294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.865298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d12c0) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.865302] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:36.525 [2024-07-25 12:39:09.865307] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:36.525 [2024-07-25 12:39:09.865316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.865321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.865327] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124dec0) 00:26:36.525 [2024-07-25 12:39:09.865334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.525 [2024-07-25 12:39:09.865344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d12c0, cid 3, qid 0 00:26:36.525 [2024-07-25 12:39:09.869559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.525 [2024-07-25 12:39:09.869567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.525 [2024-07-25 12:39:09.869571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.869574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x12d12c0) on tqpair=0x124dec0 00:26:36.525 [2024-07-25 12:39:09.869585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.869589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:36.525 [2024-07-25 12:39:09.869592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124dec0) 00:26:36.525 [2024-07-25 12:39:09.869601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.525 [2024-07-25 12:39:09.869612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d12c0, cid 3, qid 0 00:26:36.525 [2024-07-25 12:39:09.869910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:36.526 [2024-07-25 12:39:09.869916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:36.526 [2024-07-25 12:39:09.869919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:36.526 [2024-07-25 12:39:09.869922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12d12c0) on tqpair=0x124dec0 00:26:36.526 [2024-07-25 12:39:09.869930] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:26:36.526 0% 00:26:36.526 Data Units Read: 0 00:26:36.526 Data Units Written: 0 00:26:36.526 Host Read Commands: 0 00:26:36.526 Host Write Commands: 0 00:26:36.526 Controller Busy Time: 0 minutes 00:26:36.526 Power Cycles: 0 00:26:36.526 Power On Hours: 0 hours 00:26:36.526 Unsafe Shutdowns: 0 00:26:36.526 Unrecoverable Media Errors: 0 00:26:36.526 Lifetime Error Log Entries: 0 00:26:36.526 Warning Temperature Time: 0 minutes 00:26:36.526 Critical Temperature Time: 0 minutes 00:26:36.526 00:26:36.526 Number of Queues 00:26:36.526 ================ 00:26:36.526 Number of I/O Submission Queues: 127 00:26:36.526 Number of I/O Completion Queues: 127 00:26:36.526 00:26:36.526 Active Namespaces 00:26:36.526 ================= 00:26:36.526 Namespace ID:1 00:26:36.526 Error Recovery Timeout: Unlimited 00:26:36.526 Command Set Identifier: NVM (00h) 00:26:36.526 Deallocate: Supported 00:26:36.526 Deallocated/Unwritten Error: Not Supported 00:26:36.526 Deallocated Read Value: Unknown 00:26:36.526 Deallocate in Write Zeroes: Not Supported 00:26:36.526 Deallocated Guard Field: 0xFFFF 00:26:36.526 Flush: Supported 00:26:36.526 Reservation: Supported 00:26:36.526 Namespace Sharing Capabilities: Multiple Controllers 00:26:36.526 Size (in LBAs): 131072 (0GiB) 00:26:36.526 Capacity (in LBAs): 131072 (0GiB) 00:26:36.526 Utilization (in LBAs): 131072 (0GiB) 00:26:36.526 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:36.526 EUI64: ABCDEF0123456789 00:26:36.526 UUID: c3317fea-93e3-4081-aaca-2d0d4200fe9d 00:26:36.526 Thin Provisioning: Not Supported 00:26:36.526 Per-NS Atomic Units: Yes 00:26:36.526 Atomic Boundary Size (Normal): 0 00:26:36.526 Atomic Boundary Size (PFail): 0 00:26:36.526 Atomic Boundary Offset: 0 00:26:36.526 Maximum Single Source Range Length: 65535 00:26:36.526 Maximum Copy Length: 65535 00:26:36.526 Maximum Source Range Count: 1 00:26:36.526 NGUID/EUI64 Never Reused: No 00:26:36.526 Namespace Write Protected: No 00:26:36.526 Number of LBA Formats: 1 00:26:36.526 Current LBA Format: LBA Format #00 00:26:36.526 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:36.526 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:36.526 12:39:09 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.526 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.526 rmmod nvme_tcp 00:26:36.526 rmmod nvme_fabrics 00:26:36.787 rmmod nvme_keyring 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 525896 ']' 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 525896 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 525896 ']' 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 525896 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.787 12:39:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 525896 00:26:36.787 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:36.787 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:36.787 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 525896' 00:26:36.787 killing process with pid 525896 00:26:36.787 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 525896 00:26:36.787 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 525896 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.048 12:39:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.959 00:26:38.959 real 0m12.668s 00:26:38.959 user 0m9.046s 00:26:38.959 sys 0m6.840s 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:38.959 ************************************ 00:26:38.959 END TEST nvmf_identify 00:26:38.959 ************************************ 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.959 12:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.220 ************************************ 00:26:39.220 START TEST nvmf_perf 00:26:39.220 ************************************ 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:39.220 * Looking for test storage... 00:26:39.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.220 12:39:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.220 12:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.358 
12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.358 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:47.359 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.359 12:39:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:47.359 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:47.359 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:47.359 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:47.359 12:39:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.359 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:47.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:26:47.620 00:26:47.620 --- 10.0.0.2 ping statistics --- 00:26:47.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.620 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:26:47.620 00:26:47.620 --- 10.0.0.1 ping statistics --- 00:26:47.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.620 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:47.620 12:39:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=531032 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 531032 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 531032 ']' 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.620 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 [2024-07-25 12:39:21.081679] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:26:47.881 [2024-07-25 12:39:21.081743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.881 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.881 [2024-07-25 12:39:21.173346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:47.881 [2024-07-25 12:39:21.268862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:47.881 [2024-07-25 12:39:21.268915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.881 [2024-07-25 12:39:21.268924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.881 [2024-07-25 12:39:21.268931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.881 [2024-07-25 12:39:21.268936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.881 [2024-07-25 12:39:21.269083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.881 [2024-07-25 12:39:21.269230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.881 [2024-07-25 12:39:21.269381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.881 [2024-07-25 12:39:21.269382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.821 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.821 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:26:48.821 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:48.821 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:48.821 12:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.821 12:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.821 12:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:48.821 12:39:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:52.138 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:52.138 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:52.138 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:52.138 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:52.432 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:52.432 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:52.432 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:52.432 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:52.432 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:52.432 [2024-07-25 12:39:25.756054] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.432 12:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.695 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:52.695 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.957 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:52.957 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:53.218 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.218 [2024-07-25 12:39:26.620125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.479 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:53.479 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:53.479 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:53.479 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:53.479 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:54.863 Initializing NVMe Controllers 00:26:54.863 Attached to NVMe Controller at 0000:65:00.0 [8086:0a54] 00:26:54.863 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:54.863 Initialization complete. Launching workers. 00:26:54.863 ======================================================== 00:26:54.863 Latency(us) 00:26:54.863 Device Information : IOPS MiB/s Average min max 00:26:54.863 PCIE (0000:65:00.0) NSID 1 from core 0: 84976.91 331.94 375.95 44.34 5246.57 00:26:54.863 ======================================================== 00:26:54.863 Total : 84976.91 331.94 375.95 44.34 5246.57 00:26:54.863 00:26:54.863 12:39:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.863 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.246 Initializing NVMe Controllers 00:26:56.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:56.246 Initialization complete. Launching workers. 
00:26:56.246 ========================================================
00:26:56.246 Latency(us)
00:26:56.246 Device Information : IOPS MiB/s Average min max
00:26:56.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.00 0.29 13606.84 374.44 45659.94
00:26:56.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19319.24 7963.87 51878.70
00:26:56.246 ========================================================
00:26:56.246 Total : 126.00 0.49 15964.34 374.44 51878.70
00:26:56.246 
00:26:56.246 12:39:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:56.246 EAL: No free 2048 kB hugepages reported on node 1
00:26:57.184 Initializing NVMe Controllers
00:26:57.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:57.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:57.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:57.184 Initialization complete. Launching workers.
00:26:57.184 ========================================================
00:26:57.184 Latency(us)
00:26:57.184 Device Information : IOPS MiB/s Average min max
00:26:57.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4309.99 16.84 7470.78 1054.52 15531.28
00:26:57.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3659.99 14.30 8820.38 4567.15 26974.10
00:26:57.184 ========================================================
00:26:57.184 Total : 7969.98 31.13 8090.54 1054.52 26974.10
00:26:57.184 
00:26:57.442 12:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:26:57.442 12:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:26:57.442 12:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:57.442 EAL: No free 2048 kB hugepages reported on node 1
00:26:59.982 Initializing NVMe Controllers
00:26:59.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:59.982 Controller IO queue size 128, less than required.
00:26:59.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:59.982 Controller IO queue size 128, less than required.
00:26:59.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:59.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:59.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:59.982 Initialization complete. Launching workers.
00:26:59.982 ======================================================== 00:26:59.982 Latency(us) 00:26:59.982 Device Information : IOPS MiB/s Average min max 00:26:59.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1138.87 284.72 114171.36 56187.10 183180.69 00:26:59.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.89 149.22 227447.66 101481.58 360264.09 00:26:59.982 ======================================================== 00:26:59.982 Total : 1735.76 433.94 153124.34 56187.10 360264.09 00:26:59.982 00:26:59.982 12:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:59.982 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.241 No valid NVMe controllers or AIO or URING devices found 00:27:00.241 Initializing NVMe Controllers 00:27:00.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.241 Controller IO queue size 128, less than required. 00:27:00.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.241 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:00.241 Controller IO queue size 128, less than required. 00:27:00.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.241 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:00.241 WARNING: Some requested NVMe devices were skipped 00:27:00.241 12:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:00.241 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.784 Initializing NVMe Controllers 00:27:02.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.784 Controller IO queue size 128, less than required. 00:27:02.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.784 Controller IO queue size 128, less than required. 00:27:02.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:02.784 Initialization complete. Launching workers. 
00:27:02.784 00:27:02.784 ==================== 00:27:02.784 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:02.784 TCP transport: 00:27:02.784 polls: 20809 00:27:02.784 idle_polls: 14380 00:27:02.784 sock_completions: 6429 00:27:02.784 nvme_completions: 7953 00:27:02.784 submitted_requests: 11968 00:27:02.784 queued_requests: 1 00:27:02.784 00:27:02.784 ==================== 00:27:02.784 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:02.784 TCP transport: 00:27:02.784 polls: 24262 00:27:02.784 idle_polls: 17587 00:27:02.784 sock_completions: 6675 00:27:02.784 nvme_completions: 4175 00:27:02.784 submitted_requests: 6246 00:27:02.784 queued_requests: 1 00:27:02.784 ======================================================== 00:27:02.784 Latency(us) 00:27:02.784 Device Information : IOPS MiB/s Average min max 00:27:02.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1987.96 496.99 65503.16 40036.14 104462.50 00:27:02.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1043.48 260.87 123850.84 38104.32 202564.27 00:27:02.784 ======================================================== 00:27:02.785 Total : 3031.44 757.86 85587.54 38104.32 202564.27 00:27:02.785 00:27:02.785 12:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:02.785 12:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.785 rmmod nvme_tcp 00:27:02.785 rmmod nvme_fabrics 00:27:02.785 rmmod nvme_keyring 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 531032 ']' 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 531032 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 531032 ']' 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 531032 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 531032 00:27:02.785 12:39:36 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 531032' 00:27:02.785 killing process with pid 531032 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 531032 00:27:02.785 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 531032 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.328 12:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:07.239 00:27:07.239 real 0m28.146s 00:27:07.239 user 1m10.410s 00:27:07.239 sys 0m8.992s 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:07.239 ************************************ 00:27:07.239 END TEST nvmf_perf 00:27:07.239 ************************************ 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.239 ************************************ 00:27:07.239 START TEST nvmf_fio_host 00:27:07.239 ************************************ 00:27:07.239 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:07.504 * Looking for test storage... 
00:27:07.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.505 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.506 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.506 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.507 12:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:15.649 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:15.649 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.649 
12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:15.649 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:15.649 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.649 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.650 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.650 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:15.650 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:15.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:27:15.911 00:27:15.911 --- 10.0.0.2 ping statistics --- 00:27:15.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.911 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:15.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:27:15.911 00:27:15.911 --- 10.0.0.1 ping statistics --- 00:27:15.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.911 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:15.911 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=538233 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 538233 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 538233 ']' 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.912 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.912 [2024-07-25 12:39:49.249823] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:27:15.912 [2024-07-25 12:39:49.249884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.912 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.172 [2024-07-25 12:39:49.342054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.172 [2024-07-25 12:39:49.435950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.172 [2024-07-25 12:39:49.436009] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.172 [2024-07-25 12:39:49.436017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.172 [2024-07-25 12:39:49.436023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.172 [2024-07-25 12:39:49.436029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:16.172 [2024-07-25 12:39:49.436164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.172 [2024-07-25 12:39:49.436318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.172 [2024-07-25 12:39:49.436471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.172 [2024-07-25 12:39:49.436473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.743 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.743 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:27:16.743 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:17.004 [2024-07-25 12:39:50.327071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.004 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:17.004 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.004 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.004 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:17.263 Malloc1 00:27:17.264 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.524 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:17.785 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.047 [2024-07-25 12:39:51.304361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.047 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:18.308 12:39:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:18.308 12:39:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:18.569 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:18.569 fio-3.35 00:27:18.569 Starting 1 thread 00:27:18.569 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.124 00:27:21.124 test: (groupid=0, jobs=1): err= 0: pid=538929: Thu Jul 25 12:39:54 2024 00:27:21.124 read: IOPS=3250, BW=12.7MiB/s (13.3MB/s)(25.6MiB/2017msec) 00:27:21.124 slat (nsec): min=1921, max=254666, avg=2201.72, stdev=4405.34 00:27:21.124 clat (usec): min=5983, max=36849, avg=21167.65, stdev=2162.20 00:27:21.124 lat (usec): min=6017, max=36851, avg=21169.85, stdev=2161.58 00:27:21.124 clat percentiles (usec): 00:27:21.124 | 1.00th=[16909], 5.00th=[18220], 10.00th=[18744], 20.00th=[19530], 00:27:21.124 | 30.00th=[20055], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:27:21.124 | 70.00th=[22152], 80.00th=[22938], 90.00th=[23462], 95.00th=[24249], 00:27:21.124 | 99.00th=[25560], 99.50th=[28967], 99.90th=[36439], 99.95th=[36963], 00:27:21.124 | 99.99th=[36963] 00:27:21.124 bw ( KiB/s): min=12248, max=13352, per=99.77%, avg=12974.00, stdev=514.88, samples=4 00:27:21.124 iops : min= 3062, max= 3338, avg=3243.50, stdev=128.72, samples=4 00:27:21.124 write: IOPS=3266, BW=12.8MiB/s (13.4MB/s)(25.7MiB/2017msec); 
0 zone resets 00:27:21.124 slat (nsec): min=1989, max=241908, avg=2316.15, stdev=3279.53 00:27:21.124 clat (usec): min=2638, max=33417, avg=17894.78, stdev=1779.71 00:27:21.124 lat (usec): min=2655, max=33419, avg=17897.10, stdev=1779.18 00:27:21.124 clat percentiles (usec): 00:27:21.124 | 1.00th=[14222], 5.00th=[15664], 10.00th=[16057], 20.00th=[16712], 00:27:21.124 | 30.00th=[17171], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:27:21.124 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[20055], 00:27:21.124 | 99.00th=[21365], 99.50th=[24511], 99.90th=[31327], 99.95th=[33162], 00:27:21.124 | 99.99th=[33424] 00:27:21.124 bw ( KiB/s): min=12800, max=13312, per=99.98%, avg=13064.00, stdev=214.17, samples=4 00:27:21.124 iops : min= 3200, max= 3328, avg=3266.00, stdev=53.54, samples=4 00:27:21.124 lat (msec) : 4=0.04%, 10=0.40%, 20=60.46%, 50=39.10% 00:27:21.124 cpu : usr=64.14%, sys=34.97%, ctx=122, majf=0, minf=44 00:27:21.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:27:21.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:21.124 issued rwts: total=6557,6589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:21.124 00:27:21.124 Run status group 0 (all jobs): 00:27:21.124 READ: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=25.6MiB (26.9MB), run=2017-2017msec 00:27:21.124 WRITE: bw=12.8MiB/s (13.4MB/s), 12.8MiB/s-12.8MiB/s (13.4MB/s-13.4MB/s), io=25.7MiB (27.0MB), run=2017-2017msec 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:21.124 12:39:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:21.384 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:21.384 fio-3.35 00:27:21.384 Starting 1 thread 00:27:21.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.928 00:27:23.928 test: (groupid=0, jobs=1): err= 0: pid=539433: Thu Jul 25 12:39:56 2024 00:27:23.928 read: IOPS=4036, BW=63.1MiB/s (66.1MB/s)(126MiB/2004msec) 00:27:23.928 slat (usec): min=3, max=100, avg= 3.41, stdev= 1.54 00:27:23.928 clat (usec): min=2095, max=39782, avg=18186.56, stdev=6867.55 00:27:23.928 lat (usec): min=2098, max=39785, avg=18189.97, stdev=6867.53 00:27:23.928 clat percentiles (usec): 00:27:23.928 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6718], 20.00th=[ 9110], 00:27:23.928 | 30.00th=[17433], 40.00th=[18482], 50.00th=[19530], 60.00th=[20579], 00:27:23.928 | 70.00th=[21627], 80.00th=[23200], 90.00th=[25035], 95.00th=[28443], 00:27:23.928 | 99.00th=[33424], 99.50th=[34866], 99.90th=[36963], 99.95th=[38011], 00:27:23.928 | 99.99th=[39584] 00:27:23.928 bw ( KiB/s): min=29344, max=39712, per=51.38%, avg=33184.00, stdev=4523.07, samples=4 00:27:23.928 iops : min= 1834, max= 2482, avg=2074.00, stdev=282.69, samples=4 00:27:23.928 write: IOPS=2477, BW=38.7MiB/s (40.6MB/s)(68.6MiB/1771msec); 0 zone resets 00:27:23.928 slat (usec): min=36, max=357, avg=38.59, stdev= 9.81 00:27:23.928 clat (usec): min=2008, max=53615, avg=23971.70, stdev=9424.44 00:27:23.928 lat (usec): min=2050, max=53655, avg=24010.29, stdev=9424.11 00:27:23.928 clat percentiles (usec): 00:27:23.928 | 1.00th=[ 7111], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[10290], 00:27:23.928 | 30.00th=[23200], 40.00th=[25297], 50.00th=[27919], 60.00th=[29230], 00:27:23.928 | 70.00th=[30016], 80.00th=[31065], 90.00th=[32900], 95.00th=[34341], 00:27:23.928 | 99.00th=[41681], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:27:23.928 | 99.99th=[53740] 00:27:23.929 bw ( KiB/s): min=30720, max=42848, per=87.68%, avg=34760.00, stdev=5475.17, samples=4 00:27:23.929 iops : min= 1920, max= 2678, avg=2172.50, stdev=342.20, samples=4 00:27:23.929 lat (msec) : 4=0.32%, 10=19.86%, 20=24.52%, 50=55.29%, 100=0.01% 00:27:23.929 cpu : usr=76.44%, sys=22.47%, ctx=45, majf=0, 
minf=71 00:27:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:23.929 issued rwts: total=8090,4388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:23.929 00:27:23.929 Run status group 0 (all jobs): 00:27:23.929 READ: bw=63.1MiB/s (66.1MB/s), 63.1MiB/s-63.1MiB/s (66.1MB/s-66.1MB/s), io=126MiB (133MB), run=2004-2004msec 00:27:23.929 WRITE: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=68.6MiB (71.9MB), run=1771-1771msec 00:27:23.929 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.929 rmmod nvme_tcp 00:27:23.929 rmmod nvme_fabrics 00:27:23.929 rmmod nvme_keyring 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 538233 ']' 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 538233 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 538233 ']' 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 538233 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 538233 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 538233' 00:27:23.929 killing process with pid 538233 00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 538233 
00:27:23.929 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 538233 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.190 12:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.183 00:27:26.183 real 0m18.912s 00:27:26.183 user 0m56.444s 00:27:26.183 sys 0m8.644s 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.183 ************************************ 00:27:26.183 END TEST nvmf_fio_host 00:27:26.183 ************************************ 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.183 12:39:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.443 ************************************ 00:27:26.443 START TEST nvmf_failover 00:27:26.443 ************************************ 00:27:26.443 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:26.443 * Looking for test storage... 
00:27:26.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.444 12:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.587 12:40:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:34.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:34.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.587 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:34.588 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:34.588 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.588 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.848 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.848 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.848 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.848 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.848 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.848 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.848 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:27:34.848 00:27:34.849 --- 10.0.0.2 ping statistics --- 00:27:34.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.849 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:27:34.849 00:27:34.849 --- 10.0.0.1 ping statistics --- 00:27:34.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.849 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=544248 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 544248 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 544248 ']' 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.849 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:35.109 [2024-07-25 12:40:08.283035] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:27:35.109 [2024-07-25 12:40:08.283097] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.109 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.109 [2024-07-25 12:40:08.372386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:35.109 [2024-07-25 12:40:08.481591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.109 [2024-07-25 12:40:08.481654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.109 [2024-07-25 12:40:08.481666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.109 [2024-07-25 12:40:08.481676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.109 [2024-07-25 12:40:08.481684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
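The nvmftestinit trace above builds the physical-NIC loopback used by the failover test: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, TCP/4420 is opened in iptables, and the target is launched inside the namespace. Condensed into plain commands (interface names and addresses exactly as logged; a sketch, not a general-purpose setup script):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator side reaches the target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # target runs inside the netns

Keeping the target port in its own namespace lets the test drive a real NIC-to-NIC TCP path on a single host while the initiator keeps using the root namespace's stack.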
00:27:35.109 [2024-07-25 12:40:08.481853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.109 [2024-07-25 12:40:08.482008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.109 [2024-07-25 12:40:08.482007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:36.052 [2024-07-25 12:40:09.383159] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.052 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:36.313 Malloc0 00:27:36.313 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.573 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:36.835 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.096 [2024-07-25 12:40:10.323094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.096 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:37.356 [2024-07-25 12:40:10.551823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:37.356 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:37.616 [2024-07-25 12:40:10.780649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=544808 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 544808 /var/tmp/bdevperf.sock 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 544808 ']' 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:37.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.616 12:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:38.556 12:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.556 12:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:27:38.556 12:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:38.817 NVMe0n1 00:27:38.817 12:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.078 00:27:39.078 12:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=545047 00:27:39.078 12:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:39.078 12:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:40.020 12:40:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.280 12:40:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:43.581 12:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:43.581 00:27:43.581 12:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:43.841 12:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:47.140 12:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.140 [2024-07-25 12:40:20.352897] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.140 12:40:20 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:48.082 12:40:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:48.343 12:40:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 545047 00:27:54.931 0 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 544808 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 544808 ']' 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 544808 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544808 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544808' 00:27:54.931 killing process with pid 544808 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 544808 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 544808 00:27:54.931 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:54.931 [2024-07-25 12:40:10.869618] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:27:54.931 [2024-07-25 12:40:10.869736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544808 ] 00:27:54.931 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.931 [2024-07-25 12:40:10.960795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.931 [2024-07-25 12:40:11.053822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.931 Running I/O for 15 seconds... 
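The failover choreography traced above reduces to: attach bdevperf to the subsystem through the 4420 and 4421 listeners, start a 15-second verify run (bdevperf was launched with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f), then remove and re-add listeners underneath the running I/O; the bdevperf log that continues below records the commands aborted on each path as it is torn down. A condensed sketch of the driver sequence, with addresses, ports, and NQN exactly as logged:

# Condensed failover sequence, driven against the target RPC socket and the bdevperf RPC socket.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s verify run
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420;  sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421;  sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420;     sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
wait   # let the 15-second run drain before bdevperf is killed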
00:27:54.931 [2024-07-25 12:40:13.617048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617273] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.931 [2024-07-25 12:40:13.617399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.931 [2024-07-25 12:40:13.617406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.932 [2024-07-25 12:40:13.617415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.932 [2024-07-25 12:40:13.617421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.932 [2024-07-25 12:40:13.617430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.932 [2024-07-25 12:40:13.617437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.932 [2024-07-25 12:40:13.617446] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.932 [2024-07-25 12:40:13.617453 - 12:40:13.619141] [... repeated nvme_qpair.c *NOTICE* records elided: WRITE sqid:1 lba:29072-29688 len:8 and READ sqid:1 lba:28672-28896 len:8 commands, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:54.934 [2024-07-25 12:40:13.619150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a0490 is same with the state(5) to be set
00:27:54.934 [2024-07-25 12:40:13.619159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:54.934 [2024-07-25 12:40:13.619165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:54.934 [2024-07-25 12:40:13.619172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28904 len:8 PRP1 0x0 PRP2 0x0
00:27:54.934 [2024-07-25 12:40:13.619179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.934 [2024-07-25 12:40:13.619234] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6a0490 was disconnected and freed. reset controller.
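Every completion notice above carries the same status pair, printed as (00/08). In NVMe terms that is status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", which is what the initiator is expected to see while the target tears down the submission queue during this failover test. Below is a minimal decoding sketch; the helper name and lookup tables are illustrative additions (not part of SPDK or this test suite) and only cover the values that appear in this log.

```python
# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion notices.
# Hypothetical helper for reading these logs; tables cover only the values seen here.
STATUS_CODE_TYPES = {0x0: "GENERIC COMMAND STATUS"}
GENERIC_STATUS_CODES = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

def decode_status(pair: str) -> str:
    """'00/08' -> 'GENERIC COMMAND STATUS / COMMAND ABORTED DUE TO SQ DELETION'."""
    sct, sc = (int(x, 16) for x in pair.split("/"))
    sct_name = STATUS_CODE_TYPES.get(sct, f"SCT 0x{sct:x}")
    sc_name = GENERIC_STATUS_CODES.get(sc, f"SC 0x{sc:x}") if sct == 0 else f"SC 0x{sc:x}"
    return f"{sct_name} / {sc_name}"

print(decode_status("00/08"))  # matches the ABORTED - SQ DELETION notices above
```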
00:27:54.934 [2024-07-25 12:40:13.619244] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:54.934 [2024-07-25 12:40:13.619272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.934 [2024-07-25 12:40:13.619280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.934 [2024-07-25 12:40:13.619289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.935 [2024-07-25 12:40:13.619296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.935 [2024-07-25 12:40:13.619303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.935 [2024-07-25 12:40:13.619315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.935 [2024-07-25 12:40:13.619323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.935 [2024-07-25 12:40:13.619329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.935 [2024-07-25 12:40:13.619337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:54.935 [2024-07-25 12:40:13.622665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:54.935 [2024-07-25 12:40:13.622699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aca10 (9): Bad file descriptor
00:27:54.935 [2024-07-25 12:40:13.748954] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
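Because the driver prints one nvme_io_qpair_print_command notice plus one spdk_nvme_print_completion notice per aborted request, the easiest way to sanity-check a run like this is to tally the notices rather than read them. The sketch below is a hypothetical helper, not part of the SPDK autotest scripts; it assumes console records shaped like the ones above and reports, per opcode and submission queue, how many commands were aborted and over which LBA range.

```python
#!/usr/bin/env python3
"""Tally aborted NVMe commands from SPDK autotest console output.

Minimal sketch (not part of the SPDK repository): it matches records shaped
like the nvme_io_qpair_print_command *NOTICE* lines above and counts
WRITE/READ commands per submission queue, tracking the LBA range they cover.
"""
import re
import sys
from collections import defaultdict

# Matches e.g. "*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29072 len:8"
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:\d+")

def main() -> None:
    counts = defaultdict(int)   # (opcode, sqid) -> number of commands
    lba_range = {}              # (opcode, sqid) -> (min_lba, max_lba)
    for line in sys.stdin:
        m = CMD_RE.search(line)
        if not m:
            continue
        opcode, sqid, lba = m.group(1), int(m.group(2)), int(m.group(3))
        key = (opcode, sqid)
        counts[key] += 1
        lo, hi = lba_range.get(key, (lba, lba))
        lba_range[key] = (min(lo, lba), max(hi, lba))
    for (opcode, sqid), n in sorted(counts.items()):
        lo, hi = lba_range[(opcode, sqid)]
        print(f"{opcode:5s} sqid:{sqid}  commands:{n:5d}  lba:{lo}-{hi}")

if __name__ == "__main__":
    main()
```

Fed the raw console log on stdin, it reduces each abort storm such as the ones in this section to a single summary line per opcode and queue.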
00:27:54.935 [2024-07-25 12:40:17.130545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.935 [2024-07-25 12:40:17.130593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.935 [2024-07-25 12:40:17.130603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.935 [2024-07-25 12:40:17.130615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.935 [2024-07-25 12:40:17.130622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.935 [2024-07-25 12:40:17.130628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.935 [2024-07-25 12:40:17.130636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.935 [2024-07-25 12:40:17.130642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.935 [2024-07-25 12:40:17.130649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aca10 is same with the state(5) to be set
00:27:54.935 [2024-07-25 12:40:17.131241 - 12:40:17.132386] [... repeated nvme_qpair.c *NOTICE* records elided: WRITE sqid:1 lba:123400-123944 len:8 and READ sqid:1 lba:123024-123072 len:8 commands, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:54.937 [2024-07-25 12:40:17.132394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1
lba:123952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:54.937 [2024-07-25 12:40:17.132551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.937 [2024-07-25 12:40:17.132567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.937 [2024-07-25 12:40:17.132581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.937 [2024-07-25 12:40:17.132596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.937 [2024-07-25 12:40:17.132611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.937 [2024-07-25 12:40:17.132626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.937 [2024-07-25 12:40:17.132640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.937 [2024-07-25 12:40:17.132655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.937 [2024-07-25 12:40:17.132664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 
[2024-07-25 12:40:17.132700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132849] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.132991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.132998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.938 [2024-07-25 12:40:17.133141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.938 [2024-07-25 12:40:17.133147] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.938 [2024-07-25 12:40:17.133162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:54.938 [2024-07-25 12:40:17.133168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:54.938 [2024-07-25 12:40:17.133175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124040 len:8 PRP1 0x0 PRP2 0x0
00:27:54.938 [2024-07-25 12:40:17.133181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.938 [2024-07-25 12:40:17.133215] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6d05d0 was disconnected and freed. reset controller.
00:27:54.938 [2024-07-25 12:40:17.133223] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:54.938 [2024-07-25 12:40:17.133231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:54.938 [2024-07-25 12:40:17.136500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:54.938 [2024-07-25 12:40:17.136522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aca10 (9): Bad file descriptor
00:27:54.938 [2024-07-25 12:40:17.166813] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:54.938 [2024-07-25 12:40:21.574097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.938 [2024-07-25 12:40:21.574139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.939 [2024-07-25 12:40:21.574154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.939 [2024-07-25 12:40:21.574162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.939 [2024-07-25 12:40:21.574171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.939 [2024-07-25 12:40:21.574178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.939 [2024-07-25 12:40:21.574187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.939 [2024-07-25 12:40:21.574193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.939 [2024-07-25 12:40:21.574201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.939 [2024-07-25 12:40:21.574214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.939 [2024-07-25 12:40:21.574222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:54.939 [2024-07-25 12:40:21.574229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574380] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.939 [2024-07-25 12:40:21.574644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:54.939 [2024-07-25 12:40:21.574699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.939 [2024-07-25 12:40:21.574728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.939 [2024-07-25 12:40:21.574735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574849] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.574989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.574998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.940 [2024-07-25 12:40:21.575199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.940 [2024-07-25 12:40:21.575208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.940 [2024-07-25 12:40:21.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115992 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 
12:40:21.575609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.941 [2024-07-25 12:40:21.575730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.941 [2024-07-25 12:40:21.575776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.941 [2024-07-25 12:40:21.575784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.942 [2024-07-25 12:40:21.575790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.942 [2024-07-25 12:40:21.575805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.942 [2024-07-25 12:40:21.575821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.942 [2024-07-25 12:40:21.575836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.575862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116056 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.575868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.575883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.575889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116904 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.575895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.575907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.575913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116912 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.575919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575926] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.575931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.575936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116920 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.575944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.575955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.575961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116928 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.575967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.575979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.575984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116936 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.575990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.575998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116944 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116952 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116064 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116072 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116080 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116088 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116096 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116104 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 12:40:21.576194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116112 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.942 [2024-07-25 
12:40:21.576218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.942 [2024-07-25 12:40:21.576223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116120 len:8 PRP1 0x0 PRP2 0x0 00:27:54.942 [2024-07-25 12:40:21.576230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576263] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6dc8f0 was disconnected and freed. reset controller. 00:27:54.942 [2024-07-25 12:40:21.576271] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:54.942 [2024-07-25 12:40:21.576291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.942 [2024-07-25 12:40:21.576298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.942 [2024-07-25 12:40:21.576312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.942 [2024-07-25 12:40:21.576326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.942 [2024-07-25 12:40:21.576339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.942 [2024-07-25 12:40:21.576345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.942 [2024-07-25 12:40:21.579624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:54.942 [2024-07-25 12:40:21.579649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aca10 (9): Bad file descriptor 00:27:54.942 [2024-07-25 12:40:21.698809] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
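The burst of ABORTED - SQ DELETION completions above is the expected teardown when the active path disappears under load: every command still queued on I/O qpair 1 is completed manually with that status, the qpair (0x6dc8f0) is freed, and bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420 before resetting the controller. The failover paths themselves come from attaching the same controller name to several target ports; a minimal sketch of that setup, reusing the RPC socket, addresses and subsystem NQN seen elsewhere in this run (flag spellings can vary between SPDK versions), looks like this:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # The first attach creates bdev NVMe0 on the primary path; repeating the attach with the
    # same -b name but a different port registers that port as an alternate (failover) path.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Detaching the in-use path (or removing its listener) forces bdev_nvme to abort queued I/O
    # and fail over to the next registered path, producing a sequence like the one logged above.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1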
00:27:54.942 00:27:54.942 Latency(us) 00:27:54.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.942 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:54.943 Verification LBA range: start 0x0 length 0x4000 00:27:54.943 NVMe0n1 : 15.05 5118.06 19.99 733.43 0.00 21786.70 759.34 49605.71 00:27:54.943 =================================================================================================================== 00:27:54.943 Total : 5118.06 19.99 733.43 0.00 21786.70 759.34 49605.71 00:27:54.943 Received shutdown signal, test time was about 15.000000 seconds 00:27:54.943 00:27:54.943 Latency(us) 00:27:54.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.943 =================================================================================================================== 00:27:54.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=547554 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 547554 /var/tmp/bdevperf.sock 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 547554 ']' 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:54.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
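The grep -c / count=3 check above is the pass criterion for the first phase: each path removal is expected to leave exactly one 'Resetting controller successful' line in the captured output. A minimal sketch of that verification, assuming the try.txt capture file this test writes under test/nvmf/host:

    logfile=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

    # Three path removals should yield three successful controller resets.
    count=$(grep -c 'Resetting controller successful' "$logfile")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, found $count" >&2
        exit 1
    fi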
00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.943 12:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:55.515 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.515 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:27:55.515 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:55.515 [2024-07-25 12:40:28.911232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:55.775 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:55.775 [2024-07-25 12:40:29.123876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:55.775 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.036 NVMe0n1 00:27:56.296 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.557 00:27:56.557 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.817 00:27:56.817 12:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:56.817 12:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:56.817 12:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:57.076 12:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:00.371 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:00.371 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:00.371 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=548479 00:28:00.371 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:00.371 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 548479 00:28:01.401 0 00:28:01.401 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.401 [2024-07-25 12:40:27.869349] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:28:01.401 [2024-07-25 12:40:27.869407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547554 ] 00:28:01.401 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.401 [2024-07-25 12:40:27.952436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.401 [2024-07-25 12:40:28.026217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.401 [2024-07-25 12:40:30.393618] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:01.401 [2024-07-25 12:40:30.393679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.401 [2024-07-25 12:40:30.393691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.401 [2024-07-25 12:40:30.393702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.401 [2024-07-25 12:40:30.393709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.401 [2024-07-25 12:40:30.393716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.401 [2024-07-25 12:40:30.393722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.401 [2024-07-25 12:40:30.393730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.401 [2024-07-25 12:40:30.393736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.401 [2024-07-25 12:40:30.393743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:01.401 [2024-07-25 12:40:30.393774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:01.401 [2024-07-25 12:40:30.393789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbfa10 (9): Bad file descriptor 00:28:01.401 [2024-07-25 12:40:30.400741] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:01.401 Running I/O for 1 seconds... 
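This second bdevperf instance was launched with -z, so it sits idle on /var/tmp/bdevperf.sock until the perform_tests RPC arrives; the one-second verify run above only begins after the NVMe0 paths have been reattached over that socket. A minimal sketch of the pattern, using the binary, socket and workload parameters from this run:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock

    # -z keeps bdevperf waiting for RPCs instead of starting the workload immediately.
    $bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &

    # ...attach the NVMe0 paths over the same socket, then kick off the workload:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests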
00:28:01.401 00:28:01.401 Latency(us) 00:28:01.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.401 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:01.401 Verification LBA range: start 0x0 length 0x4000 00:28:01.401 NVMe0n1 : 1.04 3326.62 12.99 0.00 0.00 38284.11 4234.63 35086.97 00:28:01.401 =================================================================================================================== 00:28:01.401 Total : 3326.62 12.99 0.00 0.00 38284.11 4234.63 35086.97 00:28:01.401 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.401 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:01.662 12:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.922 12:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:01.922 12:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:02.182 12:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.443 12:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 547554 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 547554 ']' 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 547554 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 547554 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 547554' 00:28:05.739 killing process with pid 547554 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 547554 00:28:05.739 12:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 547554 00:28:05.739 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:05.739 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.998 rmmod nvme_tcp 00:28:05.998 rmmod nvme_fabrics 00:28:05.998 rmmod nvme_keyring 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 544248 ']' 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 544248 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 544248 ']' 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 544248 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544248 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544248' 00:28:05.998 killing process with pid 544248 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 544248 00:28:05.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 544248 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.799 00:28:08.799 real 0m42.084s 00:28:08.799 user 2m7.897s 00:28:08.799 sys 0m9.494s 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:08.799 ************************************ 00:28:08.799 END TEST nvmf_failover 00:28:08.799 ************************************ 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.799 ************************************ 00:28:08.799 START TEST nvmf_host_discovery 00:28:08.799 ************************************ 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:08.799 * Looking for test storage... 00:28:08.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.799 12:40:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.799 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.936 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.937 12:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:16.937 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:16.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:16.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:16.937 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.937 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.937 12:40:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:28:16.937 00:28:16.937 --- 10.0.0.2 ping statistics --- 00:28:16.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.937 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:28:16.937 00:28:16.937 --- 10.0.0.1 ping statistics --- 00:28:16.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.937 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=553869 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 553869 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 553869 ']' 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:16.937 12:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.199 [2024-07-25 12:40:50.387757] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:28:17.199 [2024-07-25 12:40:50.387817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.199 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.199 [2024-07-25 12:40:50.479269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.199 [2024-07-25 12:40:50.586325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.199 [2024-07-25 12:40:50.586390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.199 [2024-07-25 12:40:50.586401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.199 [2024-07-25 12:40:50.586410] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.199 [2024-07-25 12:40:50.586418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.199 [2024-07-25 12:40:50.586450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.143 [2024-07-25 12:40:51.317837] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:28:18.143 [2024-07-25 12:40:51.330137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.143 null0 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.143 null1 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=553929 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 553929 /tmp/host.sock 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 553929 ']' 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:18.143 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.143 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.143 [2024-07-25 12:40:51.425614] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
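[editorial note] By this point the target application (nvmfpid 553869) is running inside the namespace and has been configured over /var/tmp/spdk.sock with a TCP transport, a discovery listener on 10.0.0.2:8009 and two null bdevs, and a second SPDK app is being started as the "host" with its RPC socket at /tmp/host.sock. A rough replay of the target-side RPCs using scripts/rpc.py from the SPDK repo root (the rpc_cmd helper in the log forwards the same arguments; all flags copied verbatim from the entries above):

    # Sketch of the target-side configuration seen in the log, issued directly with rpc.py.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"      # default RPC socket of the target app
    $RPC nvmf_create_transport -t tcp -o -u 8192    # transport options as logged
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $RPC bdev_null_create null0 1000 512            # null bdevs later exposed as namespaces
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine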
00:28:18.143 [2024-07-25 12:40:51.425685] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553929 ] 00:28:18.143 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.143 [2024-07-25 12:40:51.513185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.404 [2024-07-25 12:40:51.606577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:18.665 12:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.665 12:40:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.665 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:18.926 12:40:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.926 [2024-07-25 12:40:52.312748] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:18.926 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:28:19.187 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:28:19.758 [2024-07-25 12:40:53.006767] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:19.758 [2024-07-25 12:40:53.006799] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:19.758 [2024-07-25 12:40:53.006819] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:19.758 
[2024-07-25 12:40:53.094080] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:20.019 [2024-07-25 12:40:53.199100] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:20.019 [2024-07-25 12:40:53.199134] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
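[editorial note] With the discovery listener, subsystem cnode0, namespace null0 and the 4420 listener in place, the host app runs bdev_nvme_start_discovery against 10.0.0.2:8009; the INFO lines above show the discovery poller attaching controller nvme0, after which bdev nvme0n1 appears and the waitforcondition loops succeed. The helpers the test keeps polling reduce to a few RPC + jq one-liners; a condensed sketch follows (function names are illustrative, the RPC invocations, jq filters, and the ten 1-second retries mirror the log):

    # Illustrative condensation of the host-side polling against /tmp/host.sock.
    HOST_RPC="scripts/rpc.py -s /tmp/host.sock"
    get_subsystem_names()    { $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()          { $HOST_RPC bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }
    get_notification_count() { $HOST_RPC notify_get_notifications -i "$1" | jq '. | length'; }  # notifications since ID $1

    # Start discovery against the target's discovery service, then wait for the bdev to appear.
    $HOST_RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    for _ in $(seq 1 10); do
        [[ "$(get_bdev_list)" == "nvme0n1" ]] && break
        sleep 1
    done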
00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:20.279 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:20.541 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:20.803 12:40:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.803 12:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.803 [2024-07-25 12:40:54.030011] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:20.803 [2024-07-25 12:40:54.030265] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:20.803 [2024-07-25 12:40:54.030302] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:28:20.803 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:20.804 12:40:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:20.804 [2024-07-25 12:40:54.158905] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:20.804 12:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:28:20.804 [2024-07-25 12:40:54.221606] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:20.804 [2024-07-25 12:40:54.221634] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:20.804 [2024-07-25 12:40:54.221639] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:22.190 12:40:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.190 [2024-07-25 12:40:55.317760] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:22.190 [2024-07-25 12:40:55.317789] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:22.190 [2024-07-25 12:40:55.321099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.190 [2024-07-25 12:40:55.321119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.190 [2024-07-25 12:40:55.321129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.190 [2024-07-25 12:40:55.321136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.190 [2024-07-25 12:40:55.321144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.190 [2024-07-25 12:40:55.321158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.190 [2024-07-25 12:40:55.321166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.190 [2024-07-25 12:40:55.321173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.190 [2024-07-25 12:40:55.321180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.190 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:22.191 [2024-07-25 12:40:55.331111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.341152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.191 [2024-07-25 12:40:55.341506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.191 [2024-07-25 12:40:55.341521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc394f0 with addr=10.0.0.2, port=4420 00:28:22.191 [2024-07-25 12:40:55.341529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.191 [2024-07-25 12:40:55.341540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.341564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.191 [2024-07-25 12:40:55.341572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.191 [2024-07-25 12:40:55.341580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:22.191 [2024-07-25 12:40:55.341592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.191 [2024-07-25 12:40:55.351210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.191 [2024-07-25 12:40:55.351502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.191 [2024-07-25 12:40:55.351513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc394f0 with addr=10.0.0.2, port=4420 00:28:22.191 [2024-07-25 12:40:55.351519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.191 [2024-07-25 12:40:55.351534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.351544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.191 [2024-07-25 12:40:55.351555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.191 [2024-07-25 12:40:55.351562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:22.191 [2024-07-25 12:40:55.351572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.191 [2024-07-25 12:40:55.361259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.191 [2024-07-25 12:40:55.361485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.191 [2024-07-25 12:40:55.361497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc394f0 with addr=10.0.0.2, port=4420 00:28:22.191 [2024-07-25 12:40:55.361504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.191 [2024-07-25 12:40:55.361514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.361523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.191 [2024-07-25 12:40:55.361529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.191 [2024-07-25 12:40:55.361536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:22.191 [2024-07-25 12:40:55.361545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
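[editorial note] These repeated ERROR lines are the expected fallout of the listener removal requested at host/discovery.sh@127 above: the established 4420 connection is torn down, and each reconnect attempt to 10.0.0.2:4420 fails with connect() errno 111 (ECONNREFUSED) because nothing is listening there any more; the noise stops once the discovery poller refreshes the log page and drops the stale path. The triggering call is a single target-side RPC, roughly:

    # Target-side RPC corresponding to host/discovery.sh@127 (arguments copied verbatim);
    # issuing it closes the existing 4420 connection and provokes the reset failures above.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420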
00:28:22.191 [2024-07-25 12:40:55.371311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.191 [2024-07-25 12:40:55.371763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.191 [2024-07-25 12:40:55.371801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc394f0 with addr=10.0.0.2, port=4420 00:28:22.191 [2024-07-25 12:40:55.371812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.191 [2024-07-25 12:40:55.371830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.371841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.191 [2024-07-25 12:40:55.371847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.191 [2024-07-25 12:40:55.371855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:22.191 [2024-07-25 12:40:55.371869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:22.191 [2024-07-25 12:40:55.381365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.191 [2024-07-25 12:40:55.381818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.191 [2024-07-25 12:40:55.381856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc394f0 with addr=10.0.0.2, port=4420 00:28:22.191 [2024-07-25 12:40:55.381867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.191 [2024-07-25 12:40:55.381885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.381909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.191 [2024-07-25 12:40:55.381916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.191 [2024-07-25 12:40:55.381923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:22.191 [2024-07-25 12:40:55.381938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:22.191 [2024-07-25 12:40:55.391426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.191 [2024-07-25 12:40:55.391878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.191 [2024-07-25 12:40:55.391915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc394f0 with addr=10.0.0.2, port=4420 00:28:22.191 [2024-07-25 12:40:55.391927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.191 [2024-07-25 12:40:55.391945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.391969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.191 [2024-07-25 12:40:55.391976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.191 [2024-07-25 12:40:55.391983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:22.191 [2024-07-25 12:40:55.391997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.191 [2024-07-25 12:40:55.401483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.191 [2024-07-25 12:40:55.401815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.191 [2024-07-25 12:40:55.401828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc394f0 with addr=10.0.0.2, port=4420 00:28:22.191 [2024-07-25 12:40:55.401836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc394f0 is same with the state(5) to be set 00:28:22.191 [2024-07-25 12:40:55.401847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc394f0 (9): Bad file descriptor 00:28:22.191 [2024-07-25 12:40:55.401856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:22.191 [2024-07-25 12:40:55.401862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:22.191 [2024-07-25 12:40:55.401869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:22.191 [2024-07-25 12:40:55.401883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
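get_bdev_list, like the get_subsystem_names and get_subsystem_paths helpers used below, simply queries the host application's private RPC socket and flattens the answer into one sorted, space-separated line so the condition above can string-compare it. A sketch matching the host/discovery.sh@55 xtrace (rpc_cmd stands for the framework's wrapper around scripts/rpc.py):

    get_bdev_list() {
        # list every bdev name known to the host app on /tmp/host.sock,
        # e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }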
00:28:22.191 [2024-07-25 12:40:55.405006] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:22.191 [2024-07-25 12:40:55.405022] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.191 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:22.192 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.395 [2024-07-25 12:40:56.776644] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:23.395 [2024-07-25 12:40:56.776660] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:23.395 [2024-07-25 12:40:56.776672] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:23.672 [2024-07-25 12:40:56.864950] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:23.934 [2024-07-25 12:40:57.137390] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:23.934 [2024-07-25 12:40:57.137418] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.934 request: 00:28:23.934 { 00:28:23.934 "name": "nvme", 00:28:23.934 "trtype": "tcp", 00:28:23.934 "traddr": "10.0.0.2", 00:28:23.934 "adrfam": "ipv4", 00:28:23.934 "trsvcid": "8009", 00:28:23.934 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:23.934 "wait_for_attach": true, 00:28:23.934 "method": "bdev_nvme_start_discovery", 00:28:23.934 "req_id": 1 00:28:23.934 } 00:28:23.934 Got JSON-RPC error response 00:28:23.934 response: 00:28:23.934 { 00:28:23.934 "code": -17, 00:28:23.934 "message": "File exists" 00:28:23.934 } 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.934 request: 00:28:23.934 { 00:28:23.934 "name": "nvme_second", 00:28:23.934 "trtype": "tcp", 00:28:23.934 "traddr": "10.0.0.2", 00:28:23.934 "adrfam": "ipv4", 00:28:23.934 "trsvcid": "8009", 00:28:23.934 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:23.934 "wait_for_attach": true, 00:28:23.934 "method": "bdev_nvme_start_discovery", 00:28:23.934 "req_id": 1 00:28:23.934 } 00:28:23.934 Got JSON-RPC error response 00:28:23.934 response: 00:28:23.934 { 00:28:23.934 "code": -17, 00:28:23.934 "message": "File exists" 00:28:23.934 } 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:28:23.934 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:23.935 12:40:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.935 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.196 12:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.135 [2024-07-25 12:40:58.400887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.135 [2024-07-25 12:40:58.400914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc548b0 with addr=10.0.0.2, port=8010 00:28:25.135 [2024-07-25 12:40:58.400925] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:25.135 [2024-07-25 12:40:58.400932] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:25.135 [2024-07-25 12:40:58.400938] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:26.075 [2024-07-25 12:40:59.403233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-07-25 12:40:59.403256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc548b0 with addr=10.0.0.2, port=8010 00:28:26.075 [2024-07-25 12:40:59.403267] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:26.075 [2024-07-25 12:40:59.403273] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:28:26.075 [2024-07-25 12:40:59.403279] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:27.016 [2024-07-25 12:41:00.405247] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:27.016 request: 00:28:27.016 { 00:28:27.016 "name": "nvme_second", 00:28:27.016 "trtype": "tcp", 00:28:27.016 "traddr": "10.0.0.2", 00:28:27.016 "adrfam": "ipv4", 00:28:27.016 "trsvcid": "8010", 00:28:27.016 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:27.016 "wait_for_attach": false, 00:28:27.016 "attach_timeout_ms": 3000, 00:28:27.016 "method": "bdev_nvme_start_discovery", 00:28:27.016 "req_id": 1 00:28:27.016 } 00:28:27.016 Got JSON-RPC error response 00:28:27.016 response: 00:28:27.016 { 00:28:27.016 "code": -110, 00:28:27.016 "message": "Connection timed out" 00:28:27.016 } 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:27.016 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 553929 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:27.276 rmmod nvme_tcp 00:28:27.276 rmmod nvme_fabrics 00:28:27.276 rmmod nvme_keyring 00:28:27.276 12:41:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 553869 ']' 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 553869 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 553869 ']' 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 553869 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 553869 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 553869' 00:28:27.276 killing process with pid 553869 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 553869 00:28:27.276 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 553869 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.537 12:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.445 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:29.445 00:28:29.445 real 0m21.038s 00:28:29.445 user 0m23.817s 00:28:29.445 sys 0m7.863s 00:28:29.445 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:29.445 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.445 ************************************ 00:28:29.445 END TEST nvmf_host_discovery 00:28:29.445 ************************************ 00:28:29.706 12:41:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:29.706 12:41:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:29.706 
12:41:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:29.706 12:41:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.706 12:41:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.706 ************************************ 00:28:29.706 START TEST nvmf_host_multipath_status 00:28:29.706 ************************************ 00:28:29.706 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:29.706 * Looking for test storage... 00:28:29.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.706 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.707 12:41:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
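The nvmf/common.sh lines above generate a host NQN/ID pair with nvme gen-hostnqn and pack them into the NVME_HOST array so that later nvme connect calls run with a consistent host identity. A minimal sketch of how these variables are typically combined (this particular invocation is an assumption for illustration; it does not appear in this excerpt):

    # connect to the test subsystem with the identity prepared above;
    # the target address and port are filled in later by nvmftestinit
    $NVME_CONNECT -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" "${NVME_HOST[@]}"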
00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:28:29.707 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:37.899 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:37.899 
12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:37.899 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:37.899 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.899 12:41:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:37.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.899 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.900 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.900 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.900 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:38.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:28:38.160 00:28:38.160 --- 10.0.0.2 ping statistics --- 00:28:38.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.160 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:28:38.160 00:28:38.160 --- 10.0.0.1 ping statistics --- 00:28:38.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.160 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=559975 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 559975 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 559975 ']' 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.160 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:38.160 [2024-07-25 12:41:11.462630] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:28:38.160 [2024-07-25 12:41:11.462693] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.160 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.160 [2024-07-25 12:41:11.555294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:38.420 [2024-07-25 12:41:11.647189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.420 [2024-07-25 12:41:11.647246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.420 [2024-07-25 12:41:11.647256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.420 [2024-07-25 12:41:11.647263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.420 [2024-07-25 12:41:11.647268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.420 [2024-07-25 12:41:11.647410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.420 [2024-07-25 12:41:11.647413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=559975 00:28:38.990 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:39.250 [2024-07-25 12:41:12.558780] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.250 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:39.510 Malloc0 00:28:39.510 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:39.769 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.029 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.287 [2024-07-25 12:41:13.493888] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.287 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:40.546 [2024-07-25 12:41:13.722533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=560445 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 560445 /var/tmp/bdevperf.sock 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 560445 ']' 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
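The target-side bring-up traced above reduces to a short rpc.py sequence. The sketch below is reconstructed from the xtrace lines (the authoritative steps live in nvmf/common.sh and test/nvmf/host/multipath_status.sh); the malloc geometry, NQN, address and ports are the ones shown in the trace, and -r on nvmf_create_subsystem is what enables the ANA reporting the rest of the test exercises.

    # Target side (default /var/tmp/spdk.sock): one subsystem, one RAM-backed
    # namespace, two TCP listeners that act as the two paths under test.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Host side: bdevperf is launched with -z so it idles until driven over its
    # own RPC socket, as in the trace:
    #   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90

After this point the trace attaches the same controller twice through that socket (ports 4420 and 4421, the second with -x multipath) and starts the 90-second verify workload via bdevperf.py perform_tests.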
00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.547 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:41.484 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.484 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:28:41.484 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:41.744 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:28:42.005 Nvme0n1 00:28:42.266 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:42.526 Nvme0n1 00:28:42.526 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:42.526 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:44.432 12:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:44.432 12:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:44.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:44.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:45.887 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:45.887 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:45.887 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.887 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:46.146 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.146 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:46.146 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.146 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:46.405 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:46.405 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:46.405 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.405 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:46.665 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.665 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:46.665 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:46.665 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.665 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.665 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:46.665 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.665 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:46.926 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.926 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:46.926 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.926 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:47.185 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.186 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:47.186 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:47.445 12:41:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:47.704 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:48.638 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:48.638 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:48.638 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.638 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:48.898 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:48.898 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:48.898 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.898 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.158 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:49.418 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.418 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:49.418 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.418 12:41:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:49.678 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.678 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:49.678 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.678 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:49.938 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.938 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:49.938 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:50.197 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:50.455 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:51.394 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:51.394 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:51.394 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.394 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:51.654 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.654 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:51.654 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.654 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.914 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:52.175 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.175 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:52.175 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.175 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:52.435 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.435 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:52.435 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.435 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:52.696 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.696 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:52.696 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:52.955 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:53.216 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:54.157 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:54.157 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:54.157 12:41:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.157 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:54.418 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.418 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:54.418 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:54.418 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.678 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:54.678 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:54.678 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.678 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:54.678 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.678 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:54.678 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.678 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:54.938 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.938 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:54.938 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.938 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:55.199 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:55.199 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:55.199 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.199 12:41:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:55.459 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:55.460 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:55.460 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:55.720 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:55.720 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.105 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:57.365 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:57.365 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:57.365 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.365 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:57.624 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:57.624 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:57.624 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.624 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:57.884 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:57.884 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:57.884 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.884 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:58.144 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:58.144 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:58.144 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:58.144 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:58.403 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:59.816 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:59.816 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:59.816 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:59.816 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:59.816 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:59.816 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:59.816 12:41:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:59.816 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:59.816 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:59.816 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:59.816 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:59.816 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:00.084 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:00.084 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:00.084 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.084 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:00.343 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:00.343 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:00.343 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.343 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:00.603 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:00.603 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:00.603 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.603 12:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:00.864 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:00.864 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:00.864 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:29:00.864 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:01.124 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:01.384 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:02.323 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:02.323 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:02.323 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:02.323 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:02.583 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:02.583 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:02.583 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:02.583 12:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:02.842 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:02.842 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:02.842 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:02.842 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:02.842 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:02.842 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:02.843 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:02.843 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:03.103 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:03.103 12:41:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:03.103 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:03.103 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:03.364 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:03.364 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:03.364 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:03.364 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:03.624 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:03.624 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:29:03.624 12:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:03.624 12:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:03.884 12:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.266 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:05.526 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.526 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:05.526 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.526 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:05.786 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.786 12:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:05.786 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.786 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:05.786 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.786 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:05.786 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.786 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:06.046 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:06.046 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:06.046 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:06.307 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:06.568 12:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
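Every iteration in the trace repeats the same three helpers from host/multipath_status.sh. A minimal reconstruction, inferred from the xtrace output (the real script may differ in details such as error handling), looks like this:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {     # $1 = port, $2 = field (current|connected|accessible), $3 = expected
        local val
        val=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$val" == "$3" ]]
    }

    check_status() {    # expected current/connected/accessible, port 4420 then 4421
        port_status 4420 current    "$1"; port_status 4421 current    "$2"
        port_status 4420 connected  "$3"; port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"; port_status 4421 accessible "$6"
    }

With the default active_passive policy only one path reports current=true even when both listeners are optimized; after the trace switches the policy to active_active (multipath_status.sh@116), both optimized paths report current=true, which is what the later check_status true true true true true true calls assert.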
00:29:07.509 12:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:07.509 12:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:07.509 12:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.509 12:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:07.770 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:07.770 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:07.770 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.770 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:08.030 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.030 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:08.030 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.030 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.290 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:08.550 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.550 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:08.550 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.550 12:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:08.810 12:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.810 12:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:08.810 12:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:09.070 12:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:09.330 12:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:10.269 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:10.269 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:10.269 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:10.269 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:10.529 12:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:10.789 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:29:10.789 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:10.789 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:10.789 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:11.049 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.049 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:11.049 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:11.049 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:11.309 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.309 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:11.309 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:11.309 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 560445 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 560445 ']' 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 560445 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 560445 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 560445' 00:29:11.569 killing process with pid 560445 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 560445 00:29:11.569 12:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 560445 00:29:11.569 Connection closed with partial response: 00:29:11.569 00:29:11.569 00:29:11.833 
12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 560445 00:29:11.833 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:11.833 [2024-07-25 12:41:13.812151] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:29:11.833 [2024-07-25 12:41:13.812225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560445 ] 00:29:11.833 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.833 [2024-07-25 12:41:13.946692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.833 [2024-07-25 12:41:14.108399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.833 Running I/O for 90 seconds... 00:29:11.833 [2024-07-25 12:41:28.909692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.833 [2024-07-25 12:41:28.909777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.909869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.909897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.909942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.909965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:29:11.833 [2024-07-25 12:41:28.910281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.833 [2024-07-25 12:41:28.910927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:11.833 [2024-07-25 12:41:28.910971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.910993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.911843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.911866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.834 [2024-07-25 12:41:28.913325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.913404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.913480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.913568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.913651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:11.834 [2024-07-25 12:41:28.913726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.913803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.913879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.913933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.913956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.914933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.914956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.915009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.834 [2024-07-25 12:41:28.915032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:11.834 [2024-07-25 12:41:28.915084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.835 [2024-07-25 12:41:28.915182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.915930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.915953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:29:11.835 [2024-07-25 12:41:28.916005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.916929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.916951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.835 [2024-07-25 12:41:28.917566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.835 [2024-07-25 12:41:28.917643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.835 [2024-07-25 12:41:28.917716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.835 [2024-07-25 12:41:28.917791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:11.835 [2024-07-25 12:41:28.917842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.835 [2024-07-25 12:41:28.917865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.917917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.917939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.917991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.918013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.918066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.918088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.918583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.918614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.918686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:11.836 [2024-07-25 12:41:28.918709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.918786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.918809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.918879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.918901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.918969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.918991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.919918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.919987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.920928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.920997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.921093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.921186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.921276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.921367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:29:11.836 [2024-07-25 12:41:28.921459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.921559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:28.921653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.836 [2024-07-25 12:41:28.921676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:11.836 [2024-07-25 12:41:42.484926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.485194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.485262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.485327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.485404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.485470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.485882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.485947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.485991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.486491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.486565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.486631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.486697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.486762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:11.837 [2024-07-25 12:41:42.486827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.486938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.486959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.487003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.487029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.487075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.837 [2024-07-25 12:41:42.487097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.492541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.492597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.492646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.492668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.492712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.492733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.492776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.492799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.492843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.837 [2024-07-25 12:41:42.492863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:11.837 [2024-07-25 12:41:42.492907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.837 [2024-07-25 12:41:42.492929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:11.837 [2024-07-25 12:41:42.492973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.838 [2024-07-25 12:41:42.492994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:11.838 [2024-07-25 12:41:42.493038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.838 [2024-07-25 12:41:42.493060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:11.838 [2024-07-25 12:41:42.493103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.838 [2024-07-25 12:41:42.493125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:11.838 [2024-07-25 12:41:42.493170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.838 [2024-07-25 12:41:42.493191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:11.838 [2024-07-25 12:41:42.493235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.838 [2024-07-25 12:41:42.493265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:11.838 [2024-07-25 12:41:42.493308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.838 [2024-07-25 12:41:42.493331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:11.838 [2024-07-25 12:41:42.493376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:11.838 [2024-07-25 12:41:42.493398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:11.838 Received shutdown signal, test time was about 28.856584 seconds
00:29:11.838
00:29:11.838                                        Latency(us)
00:29:11.838 Device Information  : runtime(s)      IOPS     MiB/s   Fail/s     TO/s   Average       min        max
00:29:11.838 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:11.838 Verification LBA range: start 0x0 length 0x4000
00:29:11.838 Nvme0n1             :      28.85   4170.67     16.29     0.00     0.00  30599.90   2369.38 3032804.43
00:29:11.838 ===================================================================================================================
00:29:11.838 Total               :              4170.67     16.29     0.00     0.00  30599.90   2369.38 3032804.43
00:29:11.838 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:12.098 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:12.098 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:12.098 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:12.098 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:12.098 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:12.099 rmmod nvme_tcp 00:29:12.099 rmmod nvme_fabrics 00:29:12.099 rmmod nvme_keyring 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 559975 ']' 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 559975 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 559975 ']' 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 559975 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 559975 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 559975' 00:29:12.099 killing process with pid 559975 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 559975 00:29:12.099 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 559975 00:29:12.359 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:12.359 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:12.359 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:12.359 12:41:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.359 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.359 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.359 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.359 12:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.268 12:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:14.268 00:29:14.268 real 0m44.742s 00:29:14.268 user 1m57.129s 00:29:14.268 sys 0m11.910s 00:29:14.268 12:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.268 12:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:14.268 ************************************ 00:29:14.268 END TEST nvmf_host_multipath_status 00:29:14.268 ************************************ 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.528 ************************************ 00:29:14.528 START TEST nvmf_discovery_remove_ifc 00:29:14.528 ************************************ 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:14.528 * Looking for test storage... 
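Before discovery_remove_ifc.sh starts doing real work, the teardown that closed the multipath_status suite above is worth condensing. A sketch of those traced commands, using the subsystem NQN, target PID and interface name recorded in this particular run (they differ from run to run):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Drop the subsystem the multipath test was exercising, clear the traps,
  # and remove the per-test scratch file.
  $spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  trap - SIGINT SIGTERM EXIT
  rm -f $spdk/test/nvmf/host/try.txt

  # nvmftestfini: flush I/O, unload the kernel initiator modules, stop the
  # nvmf_tgt started for the suite, and flush the address left on the
  # initiator-side interface.
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 559975 && wait 559975        # nvmfpid recorded when the target was launched
  ip -4 addr flush cvl_0_1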
00:29:14.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
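Sourcing test/nvmf/common.sh, as traced above, pins down the constants the rest of the suite relies on. The values seen in this run (the host NQN and ID come from nvme gen-hostnqn, so they change on every invocation):

  NVMF_PORT=4420                      # primary NVMe/TCP data port
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_IP_LEAST_ADDR=8
  NVMF_TCP_IP_ADDRESS=127.0.0.1
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-... here
  NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a   # derived from the host NQN
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NET_TYPE=phy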
00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.528 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:29:22.664 12:41:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:22.664 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:22.664 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:22.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.664 
12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:22.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.664 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.665 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.665 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.665 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:22.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:22.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:29:22.925 00:29:22.925 --- 10.0.0.2 ping statistics --- 00:29:22.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.925 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:22.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:29:22.925 00:29:22.925 --- 10.0.0.1 ping statistics --- 00:29:22.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.925 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.925 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=570008 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 570008 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 570008 ']' 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
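The nvmf_tcp_init steps traced above build the two-port test topology: the target-side interface cvl_0_0 is moved into a private network namespace while the initiator-side interface cvl_0_1 stays in the root namespace, so NVMe/TCP traffic really crosses the link between the two e810 ports. A condensed sketch of those commands (interface names, namespace name and addresses are the ones used on this machine):

  TARGET_IF=cvl_0_0             # NVMF_TARGET_INTERFACE
  INITIATOR_IF=cvl_0_1          # NVMF_INITIATOR_INTERFACE
  NS=cvl_0_0_ns_spdk            # NVMF_TARGET_NAMESPACE

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  # Isolate the target port in its own namespace.
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # NVMF_INITIATOR_IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open TCP port 4420 on the initiator-side interface.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions before anything is started.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

With both pings passing, the target application is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so everything it listens on is reachable only via 10.0.0.2.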
00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:22.926 12:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.926 [2024-07-25 12:41:56.325339] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:29:22.926 [2024-07-25 12:41:56.325401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.186 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.186 [2024-07-25 12:41:56.413328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.186 [2024-07-25 12:41:56.519210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.186 [2024-07-25 12:41:56.519276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.186 [2024-07-25 12:41:56.519286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.186 [2024-07-25 12:41:56.519296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.186 [2024-07-25 12:41:56.519303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.186 [2024-07-25 12:41:56.519343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.128 [2024-07-25 12:41:57.246960] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.128 [2024-07-25 12:41:57.255184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:24.128 null0 00:29:24.128 [2024-07-25 12:41:57.287157] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=570304 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 570304 /tmp/host.sock 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 570304 ']' 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:24.128 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.128 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.128 [2024-07-25 12:41:57.369891] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:29:24.128 [2024-07-25 12:41:57.369955] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570304 ] 00:29:24.128 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.128 [2024-07-25 12:41:57.455351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.389 [2024-07-25 12:41:57.548253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.961 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.961 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:29:24.961 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:24.962 
12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.962 12:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:26.346 [2024-07-25 12:41:59.376787] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:26.346 [2024-07-25 12:41:59.376821] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:26.346 [2024-07-25 12:41:59.376836] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:26.346 [2024-07-25 12:41:59.465099] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:26.346 [2024-07-25 12:41:59.693654] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:26.346 [2024-07-25 12:41:59.693723] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:26.346 [2024-07-25 12:41:59.693747] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:26.346 [2024-07-25 12:41:59.693765] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:26.346 [2024-07-25 12:41:59.693788] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:26.346 [2024-07-25 12:41:59.696418] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x13c6250 was disconnected and freed. delete nvme_qpair. 
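Two SPDK applications are involved from here on: the nvmf_tgt inside the namespace acts as the target (it is already listening on 10.0.0.2 ports 8009 and 4420, with a null0 bdev created for the subsystem), and a second nvmf_tgt started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme plays the host. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; spelled out as plain rpc.py calls, the host-side bring-up sketched from the trace looks like this:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Apply the bdev_nvme option used by the test (-e 1, as traced), then
  # let the host app finish subsystem initialization.
  $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
  $rpc -s /tmp/host.sock framework_start_init

  # Attach to the discovery service in the target namespace and block
  # until the discovered subsystem's controller is attached. The short
  # ctrlr-loss / reconnect / fast-io-fail timeouts are what make the
  # interface removal later in the test detach nvme0n1 within seconds.
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

The discovery log page returned by the target advertises nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, so the attach completes with the bdev nvme0n1 visible on the host side, as the get_bdev_list checks below confirm.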
00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:26.346 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:26.607 12:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:27.551 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:27.551 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:27.551 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:27.551 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.551 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:27.551 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:27.551 12:42:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:27.812 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.812 12:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:27.812 12:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:28.823 12:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:29.838 12:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.778 12:42:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:30.778 12:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:31.720 [2024-07-25 12:42:05.133572] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:31.720 [2024-07-25 12:42:05.133617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.720 [2024-07-25 12:42:05.133630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.720 [2024-07-25 12:42:05.133640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.720 [2024-07-25 12:42:05.133647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.720 [2024-07-25 12:42:05.133655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.720 [2024-07-25 12:42:05.133662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.720 [2024-07-25 12:42:05.133669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.720 [2024-07-25 12:42:05.133675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.720 [2024-07-25 12:42:05.133683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.720 [2024-07-25 12:42:05.133690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.720 [2024-07-25 12:42:05.133696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138cc90 is same with the state(5) to be set 00:29:31.982 [2024-07-25 12:42:05.143591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138cc90 (9): Bad file descriptor 00:29:31.982 [2024-07-25 12:42:05.153632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:31.982 12:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:31.982 12:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:31.982 12:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.982 12:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:31.982 12:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
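The pattern that repeats above (get_bdev_list, a string compare, sleep 1) is how the test waits for the host's bdev list to reach an expected state. Both helpers are defined in discovery_remove_ifc.sh; judging only from the trace they behave roughly like the following sketch (the real functions may differ in detail, for example by bounding the number of retries):

  get_bdev_list() {
      # Ask the host-side app for its bdevs and normalize the names into
      # one sorted, space-separated string.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected="$1"      # "nvme0n1", "nvme1n1", or "" for "no bdevs left"
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

At this point the test has already removed 10.0.0.2 from cvl_0_0 and downed the link inside the namespace, and it is polling wait_for_bdev '': once the 2-second controller-loss timeout expires, nvme0n1 disappears and the empty list satisfies the wait.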
common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.982 12:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:31.982 12:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:32.924 [2024-07-25 12:42:06.209669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:32.924 [2024-07-25 12:42:06.209772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138cc90 with addr=10.0.0.2, port=4420 00:29:32.924 [2024-07-25 12:42:06.209805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138cc90 is same with the state(5) to be set 00:29:32.924 [2024-07-25 12:42:06.209868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138cc90 (9): Bad file descriptor 00:29:32.924 [2024-07-25 12:42:06.210991] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.924 [2024-07-25 12:42:06.211064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:32.924 [2024-07-25 12:42:06.211090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:32.924 [2024-07-25 12:42:06.211114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:32.924 [2024-07-25 12:42:06.211177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.924 [2024-07-25 12:42:06.211205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:32.924 12:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.924 12:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:32.924 12:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:33.867 [2024-07-25 12:42:07.213612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:33.867 [2024-07-25 12:42:07.213634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:33.867 [2024-07-25 12:42:07.213642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:33.867 [2024-07-25 12:42:07.213649] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:29:33.867 [2024-07-25 12:42:07.213661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.867 [2024-07-25 12:42:07.213679] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:33.867 [2024-07-25 12:42:07.213701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.867 [2024-07-25 12:42:07.213711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.867 [2024-07-25 12:42:07.213720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.867 [2024-07-25 12:42:07.213727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.867 [2024-07-25 12:42:07.213736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.867 [2024-07-25 12:42:07.213749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.867 [2024-07-25 12:42:07.213757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.867 [2024-07-25 12:42:07.213764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.867 [2024-07-25 12:42:07.213771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.867 [2024-07-25 12:42:07.213778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.867 [2024-07-25 12:42:07.213785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:29:33.867 [2024-07-25 12:42:07.214392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138c0f0 (9): Bad file descriptor 00:29:33.867 [2024-07-25 12:42:07.215405] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:33.867 [2024-07-25 12:42:07.215416] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:33.867 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:34.128 12:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:35.068 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:35.068 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.068 12:42:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:35.068 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.068 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:35.068 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:35.068 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:35.068 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.329 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:35.329 12:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:35.900 [2024-07-25 12:42:09.273646] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:35.900 [2024-07-25 12:42:09.273665] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:35.900 [2024-07-25 12:42:09.273676] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:36.171 [2024-07-25 12:42:09.361951] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.171 [2024-07-25 12:42:09.548024] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:36.171 [2024-07-25 12:42:09.548058] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:36.171 [2024-07-25 12:42:09.548076] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:36.171 [2024-07-25 12:42:09.548089] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:36.171 [2024-07-25 12:42:09.548096] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:36.171 [2024-07-25 12:42:09.551019] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x13cfcc0 was disconnected and freed. delete nvme_qpair. 
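The get_bdev_list/wait_for_bdev xtrace records above and below show how the test decides the interface flap is over: it repeatedly asks the SPDK host application on /tmp/host.sock for its bdev names and sleeps one second between polls until nvme1n1 shows up again. A minimal bash sketch of that polling pattern, assuming SPDK's scripts/rpc.py and jq are available (the rpc.py path is an assumption; the socket, RPC name and bdev name are taken from this run):

```bash
#!/usr/bin/env bash
# Sketch of the get_bdev_list / wait_for_bdev polling seen in the trace above.
HOST_SOCK=/tmp/host.sock
RPC=./scripts/rpc.py            # assumed location of SPDK's rpc.py

get_bdev_list() {
    # Ask the host app for its bdevs and flatten the names onto one line,
    # mirroring the `bdev_get_bdevs | jq | sort | xargs` pipeline in the trace.
    "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    # Poll once a second until the expected bdev appears in the list.
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme1n1
```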
00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:36.171 12:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 570304 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 570304 ']' 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 570304 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 570304 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 570304' 00:29:37.556 killing process with pid 570304 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 570304 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 570304 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:29:37.556 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.557 rmmod nvme_tcp 00:29:37.557 rmmod nvme_fabrics 00:29:37.557 rmmod nvme_keyring 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 570008 ']' 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 570008 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 570008 ']' 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 570008 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 570008 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 570008' 00:29:37.557 killing process with pid 570008 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 570008 00:29:37.557 12:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 570008 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.818 12:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:40.363 00:29:40.363 real 0m25.467s 00:29:40.363 user 0m30.215s 00:29:40.363 sys 0m7.638s 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:40.363 ************************************ 
00:29:40.363 END TEST nvmf_discovery_remove_ifc 00:29:40.363 ************************************ 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.363 ************************************ 00:29:40.363 START TEST nvmf_identify_kernel_target 00:29:40.363 ************************************ 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:40.363 * Looking for test storage... 00:29:40.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.363 12:42:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:29:40.363 12:42:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:29:40.363 12:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.503 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.503 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.503 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.503 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.504 12:42:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:48.504 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.504 12:42:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:48.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:48.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
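The prepare_net_devs trace above resolves each matching E810 PCI function (0000:4b:00.0 and 0000:4b:00.1 in this run) to the network interfaces the kernel created for it by globbing sysfs, which is how cvl_0_0 and cvl_0_1 are found. A short bash sketch of that lookup, using one of the PCI addresses from this run as an example value:

```bash
#!/usr/bin/env bash
# Sketch of the sysfs lookup performed by prepare_net_devs in the trace above.
pci=0000:4b:00.0                                  # example value from this run
pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)  # one entry per interface
pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the names, e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```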
00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:48.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:48.504 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:29:48.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:29:48.504 00:29:48.504 --- 10.0.0.2 ping statistics --- 00:29:48.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.504 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:29:48.504 00:29:48.504 --- 10.0.0.1 ping statistics --- 00:29:48.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.504 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:29:48.504 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- 
# configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:48.505 12:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:52.705 Waiting for block devices as requested 00:29:52.706 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:52.706 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:52.706 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:52.706 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:52.965 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:52.965 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:52.966 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:52.966 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:53.226 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:29:53.226 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:53.485 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:53.485 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:53.485 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:53.745 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:53.745 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:53.745 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:54.006 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # 
local block=nvme0n1 pt 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:54.006 No valid GPT data, bailing 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:54.006 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:29:54.006 00:29:54.006 Discovery Log Number of Records 2, Generation counter 2 00:29:54.006 =====Discovery Log Entry 0====== 00:29:54.006 trtype: tcp 00:29:54.006 adrfam: ipv4 00:29:54.006 subtype: current discovery subsystem 00:29:54.006 treq: not specified, sq flow control disable supported 00:29:54.006 portid: 1 00:29:54.006 trsvcid: 4420 00:29:54.006 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:54.006 traddr: 10.0.0.1 00:29:54.006 eflags: none 00:29:54.006 sectype: none 00:29:54.006 =====Discovery Log Entry 1====== 00:29:54.006 trtype: tcp 00:29:54.006 adrfam: ipv4 00:29:54.006 subtype: nvme subsystem 00:29:54.006 treq: not specified, sq flow control disable supported 00:29:54.006 portid: 1 00:29:54.006 trsvcid: 4420 00:29:54.006 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:54.006 traddr: 10.0.0.1 00:29:54.006 eflags: none 00:29:54.006 sectype: none 00:29:54.006 12:42:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:54.006 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:54.006 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.006 ===================================================== 00:29:54.006 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:54.006 ===================================================== 00:29:54.006 Controller Capabilities/Features 00:29:54.006 ================================ 00:29:54.006 Vendor ID: 0000 00:29:54.006 Subsystem Vendor ID: 0000 00:29:54.006 Serial Number: f06c7a94241027d60491 00:29:54.006 Model Number: Linux 00:29:54.006 Firmware Version: 6.7.0-68 00:29:54.006 Recommended Arb Burst: 0 00:29:54.006 IEEE OUI Identifier: 00 00 00 00:29:54.006 Multi-path I/O 00:29:54.006 May have multiple subsystem ports: No 00:29:54.006 May have multiple controllers: No 00:29:54.006 Associated with SR-IOV VF: No 00:29:54.006 Max Data Transfer Size: Unlimited 00:29:54.006 Max Number of Namespaces: 0 00:29:54.006 Max Number of I/O Queues: 1024 00:29:54.006 NVMe Specification Version (VS): 1.3 00:29:54.006 NVMe Specification Version (Identify): 1.3 00:29:54.006 Maximum Queue Entries: 1024 00:29:54.006 Contiguous Queues Required: No 00:29:54.006 Arbitration Mechanisms Supported 00:29:54.006 Weighted Round Robin: Not Supported 00:29:54.006 Vendor Specific: Not Supported 00:29:54.006 Reset Timeout: 7500 ms 00:29:54.006 Doorbell Stride: 4 bytes 00:29:54.006 NVM Subsystem Reset: Not Supported 00:29:54.006 Command Sets Supported 00:29:54.006 NVM Command Set: Supported 00:29:54.006 Boot Partition: Not Supported 00:29:54.006 Memory Page Size Minimum: 4096 bytes 00:29:54.006 Memory Page Size Maximum: 4096 bytes 00:29:54.007 Persistent Memory Region: Not Supported 00:29:54.007 Optional Asynchronous Events Supported 00:29:54.007 Namespace Attribute Notices: Not Supported 00:29:54.007 Firmware Activation Notices: Not Supported 00:29:54.007 ANA Change Notices: Not Supported 00:29:54.007 PLE Aggregate Log Change Notices: Not Supported 00:29:54.007 LBA Status Info Alert Notices: Not Supported 00:29:54.007 EGE Aggregate Log Change Notices: Not Supported 00:29:54.007 Normal NVM Subsystem Shutdown event: Not Supported 00:29:54.007 Zone Descriptor Change Notices: Not Supported 00:29:54.007 Discovery Log Change Notices: Supported 00:29:54.007 Controller Attributes 00:29:54.007 128-bit Host Identifier: Not Supported 00:29:54.007 Non-Operational Permissive Mode: Not Supported 00:29:54.007 NVM Sets: Not Supported 00:29:54.007 Read Recovery Levels: Not Supported 00:29:54.007 Endurance Groups: Not Supported 00:29:54.007 Predictable Latency Mode: Not Supported 00:29:54.007 Traffic Based Keep ALive: Not Supported 00:29:54.007 Namespace Granularity: Not Supported 00:29:54.007 SQ Associations: Not Supported 00:29:54.007 UUID List: Not Supported 00:29:54.007 Multi-Domain Subsystem: Not Supported 00:29:54.007 Fixed Capacity Management: Not Supported 00:29:54.007 Variable Capacity Management: Not Supported 00:29:54.007 Delete Endurance Group: Not Supported 00:29:54.007 Delete NVM Set: Not Supported 00:29:54.007 Extended LBA Formats Supported: Not Supported 00:29:54.007 Flexible Data Placement Supported: Not Supported 00:29:54.007 00:29:54.007 Controller Memory Buffer Support 00:29:54.007 ================================ 00:29:54.007 Supported: No 
00:29:54.007 00:29:54.007 Persistent Memory Region Support 00:29:54.007 ================================ 00:29:54.007 Supported: No 00:29:54.007 00:29:54.007 Admin Command Set Attributes 00:29:54.007 ============================ 00:29:54.007 Security Send/Receive: Not Supported 00:29:54.007 Format NVM: Not Supported 00:29:54.007 Firmware Activate/Download: Not Supported 00:29:54.007 Namespace Management: Not Supported 00:29:54.007 Device Self-Test: Not Supported 00:29:54.007 Directives: Not Supported 00:29:54.007 NVMe-MI: Not Supported 00:29:54.007 Virtualization Management: Not Supported 00:29:54.007 Doorbell Buffer Config: Not Supported 00:29:54.007 Get LBA Status Capability: Not Supported 00:29:54.007 Command & Feature Lockdown Capability: Not Supported 00:29:54.007 Abort Command Limit: 1 00:29:54.007 Async Event Request Limit: 1 00:29:54.007 Number of Firmware Slots: N/A 00:29:54.007 Firmware Slot 1 Read-Only: N/A 00:29:54.269 Firmware Activation Without Reset: N/A 00:29:54.269 Multiple Update Detection Support: N/A 00:29:54.269 Firmware Update Granularity: No Information Provided 00:29:54.269 Per-Namespace SMART Log: No 00:29:54.269 Asymmetric Namespace Access Log Page: Not Supported 00:29:54.269 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:54.269 Command Effects Log Page: Not Supported 00:29:54.269 Get Log Page Extended Data: Supported 00:29:54.269 Telemetry Log Pages: Not Supported 00:29:54.269 Persistent Event Log Pages: Not Supported 00:29:54.269 Supported Log Pages Log Page: May Support 00:29:54.269 Commands Supported & Effects Log Page: Not Supported 00:29:54.269 Feature Identifiers & Effects Log Page:May Support 00:29:54.269 NVMe-MI Commands & Effects Log Page: May Support 00:29:54.269 Data Area 4 for Telemetry Log: Not Supported 00:29:54.269 Error Log Page Entries Supported: 1 00:29:54.269 Keep Alive: Not Supported 00:29:54.269 00:29:54.269 NVM Command Set Attributes 00:29:54.269 ========================== 00:29:54.269 Submission Queue Entry Size 00:29:54.269 Max: 1 00:29:54.269 Min: 1 00:29:54.269 Completion Queue Entry Size 00:29:54.269 Max: 1 00:29:54.269 Min: 1 00:29:54.269 Number of Namespaces: 0 00:29:54.269 Compare Command: Not Supported 00:29:54.269 Write Uncorrectable Command: Not Supported 00:29:54.269 Dataset Management Command: Not Supported 00:29:54.269 Write Zeroes Command: Not Supported 00:29:54.269 Set Features Save Field: Not Supported 00:29:54.269 Reservations: Not Supported 00:29:54.269 Timestamp: Not Supported 00:29:54.269 Copy: Not Supported 00:29:54.269 Volatile Write Cache: Not Present 00:29:54.269 Atomic Write Unit (Normal): 1 00:29:54.269 Atomic Write Unit (PFail): 1 00:29:54.269 Atomic Compare & Write Unit: 1 00:29:54.269 Fused Compare & Write: Not Supported 00:29:54.269 Scatter-Gather List 00:29:54.269 SGL Command Set: Supported 00:29:54.269 SGL Keyed: Not Supported 00:29:54.269 SGL Bit Bucket Descriptor: Not Supported 00:29:54.269 SGL Metadata Pointer: Not Supported 00:29:54.269 Oversized SGL: Not Supported 00:29:54.269 SGL Metadata Address: Not Supported 00:29:54.269 SGL Offset: Supported 00:29:54.269 Transport SGL Data Block: Not Supported 00:29:54.269 Replay Protected Memory Block: Not Supported 00:29:54.269 00:29:54.269 Firmware Slot Information 00:29:54.269 ========================= 00:29:54.269 Active slot: 0 00:29:54.269 00:29:54.269 00:29:54.269 Error Log 00:29:54.269 ========= 00:29:54.269 00:29:54.269 Active Namespaces 00:29:54.269 ================= 00:29:54.269 Discovery Log Page 00:29:54.269 ================== 00:29:54.269 
Generation Counter: 2 00:29:54.269 Number of Records: 2 00:29:54.269 Record Format: 0 00:29:54.269 00:29:54.269 Discovery Log Entry 0 00:29:54.269 ---------------------- 00:29:54.269 Transport Type: 3 (TCP) 00:29:54.269 Address Family: 1 (IPv4) 00:29:54.269 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:54.269 Entry Flags: 00:29:54.269 Duplicate Returned Information: 0 00:29:54.269 Explicit Persistent Connection Support for Discovery: 0 00:29:54.269 Transport Requirements: 00:29:54.269 Secure Channel: Not Specified 00:29:54.269 Port ID: 1 (0x0001) 00:29:54.269 Controller ID: 65535 (0xffff) 00:29:54.269 Admin Max SQ Size: 32 00:29:54.269 Transport Service Identifier: 4420 00:29:54.269 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:54.269 Transport Address: 10.0.0.1 00:29:54.269 Discovery Log Entry 1 00:29:54.269 ---------------------- 00:29:54.269 Transport Type: 3 (TCP) 00:29:54.269 Address Family: 1 (IPv4) 00:29:54.269 Subsystem Type: 2 (NVM Subsystem) 00:29:54.269 Entry Flags: 00:29:54.269 Duplicate Returned Information: 0 00:29:54.269 Explicit Persistent Connection Support for Discovery: 0 00:29:54.269 Transport Requirements: 00:29:54.269 Secure Channel: Not Specified 00:29:54.269 Port ID: 1 (0x0001) 00:29:54.269 Controller ID: 65535 (0xffff) 00:29:54.269 Admin Max SQ Size: 32 00:29:54.269 Transport Service Identifier: 4420 00:29:54.269 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:54.269 Transport Address: 10.0.0.1 00:29:54.269 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:54.269 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.269 get_feature(0x01) failed 00:29:54.269 get_feature(0x02) failed 00:29:54.269 get_feature(0x04) failed 00:29:54.269 ===================================================== 00:29:54.269 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:54.269 ===================================================== 00:29:54.269 Controller Capabilities/Features 00:29:54.269 ================================ 00:29:54.269 Vendor ID: 0000 00:29:54.269 Subsystem Vendor ID: 0000 00:29:54.269 Serial Number: f93fcaaf5675edacfe05 00:29:54.269 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:54.269 Firmware Version: 6.7.0-68 00:29:54.269 Recommended Arb Burst: 6 00:29:54.269 IEEE OUI Identifier: 00 00 00 00:29:54.269 Multi-path I/O 00:29:54.269 May have multiple subsystem ports: Yes 00:29:54.269 May have multiple controllers: Yes 00:29:54.269 Associated with SR-IOV VF: No 00:29:54.269 Max Data Transfer Size: Unlimited 00:29:54.269 Max Number of Namespaces: 1024 00:29:54.269 Max Number of I/O Queues: 128 00:29:54.269 NVMe Specification Version (VS): 1.3 00:29:54.269 NVMe Specification Version (Identify): 1.3 00:29:54.269 Maximum Queue Entries: 1024 00:29:54.269 Contiguous Queues Required: No 00:29:54.269 Arbitration Mechanisms Supported 00:29:54.269 Weighted Round Robin: Not Supported 00:29:54.269 Vendor Specific: Not Supported 00:29:54.269 Reset Timeout: 7500 ms 00:29:54.269 Doorbell Stride: 4 bytes 00:29:54.269 NVM Subsystem Reset: Not Supported 00:29:54.269 Command Sets Supported 00:29:54.269 NVM Command Set: Supported 00:29:54.269 Boot Partition: Not Supported 00:29:54.269 Memory Page Size Minimum: 4096 bytes 00:29:54.269 Memory Page Size Maximum: 4096 bytes 00:29:54.269 
Persistent Memory Region: Not Supported 00:29:54.269 Optional Asynchronous Events Supported 00:29:54.269 Namespace Attribute Notices: Supported 00:29:54.269 Firmware Activation Notices: Not Supported 00:29:54.269 ANA Change Notices: Supported 00:29:54.269 PLE Aggregate Log Change Notices: Not Supported 00:29:54.269 LBA Status Info Alert Notices: Not Supported 00:29:54.269 EGE Aggregate Log Change Notices: Not Supported 00:29:54.270 Normal NVM Subsystem Shutdown event: Not Supported 00:29:54.270 Zone Descriptor Change Notices: Not Supported 00:29:54.270 Discovery Log Change Notices: Not Supported 00:29:54.270 Controller Attributes 00:29:54.270 128-bit Host Identifier: Supported 00:29:54.270 Non-Operational Permissive Mode: Not Supported 00:29:54.270 NVM Sets: Not Supported 00:29:54.270 Read Recovery Levels: Not Supported 00:29:54.270 Endurance Groups: Not Supported 00:29:54.270 Predictable Latency Mode: Not Supported 00:29:54.270 Traffic Based Keep ALive: Supported 00:29:54.270 Namespace Granularity: Not Supported 00:29:54.270 SQ Associations: Not Supported 00:29:54.270 UUID List: Not Supported 00:29:54.270 Multi-Domain Subsystem: Not Supported 00:29:54.270 Fixed Capacity Management: Not Supported 00:29:54.270 Variable Capacity Management: Not Supported 00:29:54.270 Delete Endurance Group: Not Supported 00:29:54.270 Delete NVM Set: Not Supported 00:29:54.270 Extended LBA Formats Supported: Not Supported 00:29:54.270 Flexible Data Placement Supported: Not Supported 00:29:54.270 00:29:54.270 Controller Memory Buffer Support 00:29:54.270 ================================ 00:29:54.270 Supported: No 00:29:54.270 00:29:54.270 Persistent Memory Region Support 00:29:54.270 ================================ 00:29:54.270 Supported: No 00:29:54.270 00:29:54.270 Admin Command Set Attributes 00:29:54.270 ============================ 00:29:54.270 Security Send/Receive: Not Supported 00:29:54.270 Format NVM: Not Supported 00:29:54.270 Firmware Activate/Download: Not Supported 00:29:54.270 Namespace Management: Not Supported 00:29:54.270 Device Self-Test: Not Supported 00:29:54.270 Directives: Not Supported 00:29:54.270 NVMe-MI: Not Supported 00:29:54.270 Virtualization Management: Not Supported 00:29:54.270 Doorbell Buffer Config: Not Supported 00:29:54.270 Get LBA Status Capability: Not Supported 00:29:54.270 Command & Feature Lockdown Capability: Not Supported 00:29:54.270 Abort Command Limit: 4 00:29:54.270 Async Event Request Limit: 4 00:29:54.270 Number of Firmware Slots: N/A 00:29:54.270 Firmware Slot 1 Read-Only: N/A 00:29:54.270 Firmware Activation Without Reset: N/A 00:29:54.270 Multiple Update Detection Support: N/A 00:29:54.270 Firmware Update Granularity: No Information Provided 00:29:54.270 Per-Namespace SMART Log: Yes 00:29:54.270 Asymmetric Namespace Access Log Page: Supported 00:29:54.270 ANA Transition Time : 10 sec 00:29:54.270 00:29:54.270 Asymmetric Namespace Access Capabilities 00:29:54.270 ANA Optimized State : Supported 00:29:54.270 ANA Non-Optimized State : Supported 00:29:54.270 ANA Inaccessible State : Supported 00:29:54.270 ANA Persistent Loss State : Supported 00:29:54.270 ANA Change State : Supported 00:29:54.270 ANAGRPID is not changed : No 00:29:54.270 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:54.270 00:29:54.270 ANA Group Identifier Maximum : 128 00:29:54.270 Number of ANA Group Identifiers : 128 00:29:54.270 Max Number of Allowed Namespaces : 1024 00:29:54.270 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:54.270 Command Effects Log Page: Supported 
00:29:54.270 Get Log Page Extended Data: Supported 00:29:54.270 Telemetry Log Pages: Not Supported 00:29:54.270 Persistent Event Log Pages: Not Supported 00:29:54.270 Supported Log Pages Log Page: May Support 00:29:54.270 Commands Supported & Effects Log Page: Not Supported 00:29:54.270 Feature Identifiers & Effects Log Page:May Support 00:29:54.270 NVMe-MI Commands & Effects Log Page: May Support 00:29:54.270 Data Area 4 for Telemetry Log: Not Supported 00:29:54.270 Error Log Page Entries Supported: 128 00:29:54.270 Keep Alive: Supported 00:29:54.270 Keep Alive Granularity: 1000 ms 00:29:54.270 00:29:54.270 NVM Command Set Attributes 00:29:54.270 ========================== 00:29:54.270 Submission Queue Entry Size 00:29:54.270 Max: 64 00:29:54.270 Min: 64 00:29:54.270 Completion Queue Entry Size 00:29:54.270 Max: 16 00:29:54.270 Min: 16 00:29:54.270 Number of Namespaces: 1024 00:29:54.270 Compare Command: Not Supported 00:29:54.270 Write Uncorrectable Command: Not Supported 00:29:54.270 Dataset Management Command: Supported 00:29:54.270 Write Zeroes Command: Supported 00:29:54.270 Set Features Save Field: Not Supported 00:29:54.270 Reservations: Not Supported 00:29:54.270 Timestamp: Not Supported 00:29:54.270 Copy: Not Supported 00:29:54.270 Volatile Write Cache: Present 00:29:54.270 Atomic Write Unit (Normal): 1 00:29:54.270 Atomic Write Unit (PFail): 1 00:29:54.270 Atomic Compare & Write Unit: 1 00:29:54.270 Fused Compare & Write: Not Supported 00:29:54.270 Scatter-Gather List 00:29:54.270 SGL Command Set: Supported 00:29:54.270 SGL Keyed: Not Supported 00:29:54.270 SGL Bit Bucket Descriptor: Not Supported 00:29:54.270 SGL Metadata Pointer: Not Supported 00:29:54.270 Oversized SGL: Not Supported 00:29:54.270 SGL Metadata Address: Not Supported 00:29:54.270 SGL Offset: Supported 00:29:54.270 Transport SGL Data Block: Not Supported 00:29:54.270 Replay Protected Memory Block: Not Supported 00:29:54.270 00:29:54.270 Firmware Slot Information 00:29:54.270 ========================= 00:29:54.270 Active slot: 0 00:29:54.270 00:29:54.270 Asymmetric Namespace Access 00:29:54.270 =========================== 00:29:54.270 Change Count : 0 00:29:54.270 Number of ANA Group Descriptors : 1 00:29:54.270 ANA Group Descriptor : 0 00:29:54.270 ANA Group ID : 1 00:29:54.270 Number of NSID Values : 1 00:29:54.270 Change Count : 0 00:29:54.270 ANA State : 1 00:29:54.270 Namespace Identifier : 1 00:29:54.270 00:29:54.270 Commands Supported and Effects 00:29:54.270 ============================== 00:29:54.270 Admin Commands 00:29:54.270 -------------- 00:29:54.270 Get Log Page (02h): Supported 00:29:54.270 Identify (06h): Supported 00:29:54.270 Abort (08h): Supported 00:29:54.270 Set Features (09h): Supported 00:29:54.270 Get Features (0Ah): Supported 00:29:54.270 Asynchronous Event Request (0Ch): Supported 00:29:54.270 Keep Alive (18h): Supported 00:29:54.270 I/O Commands 00:29:54.270 ------------ 00:29:54.270 Flush (00h): Supported 00:29:54.270 Write (01h): Supported LBA-Change 00:29:54.270 Read (02h): Supported 00:29:54.270 Write Zeroes (08h): Supported LBA-Change 00:29:54.270 Dataset Management (09h): Supported 00:29:54.270 00:29:54.270 Error Log 00:29:54.270 ========= 00:29:54.270 Entry: 0 00:29:54.270 Error Count: 0x3 00:29:54.270 Submission Queue Id: 0x0 00:29:54.270 Command Id: 0x5 00:29:54.270 Phase Bit: 0 00:29:54.270 Status Code: 0x2 00:29:54.270 Status Code Type: 0x0 00:29:54.270 Do Not Retry: 1 00:29:54.270 Error Location: 0x28 00:29:54.270 LBA: 0x0 00:29:54.270 Namespace: 0x0 00:29:54.270 Vendor Log 
Page: 0x0 00:29:54.270 ----------- 00:29:54.270 Entry: 1 00:29:54.270 Error Count: 0x2 00:29:54.270 Submission Queue Id: 0x0 00:29:54.270 Command Id: 0x5 00:29:54.270 Phase Bit: 0 00:29:54.270 Status Code: 0x2 00:29:54.270 Status Code Type: 0x0 00:29:54.270 Do Not Retry: 1 00:29:54.270 Error Location: 0x28 00:29:54.270 LBA: 0x0 00:29:54.270 Namespace: 0x0 00:29:54.270 Vendor Log Page: 0x0 00:29:54.270 ----------- 00:29:54.270 Entry: 2 00:29:54.270 Error Count: 0x1 00:29:54.270 Submission Queue Id: 0x0 00:29:54.270 Command Id: 0x4 00:29:54.270 Phase Bit: 0 00:29:54.270 Status Code: 0x2 00:29:54.270 Status Code Type: 0x0 00:29:54.270 Do Not Retry: 1 00:29:54.270 Error Location: 0x28 00:29:54.270 LBA: 0x0 00:29:54.270 Namespace: 0x0 00:29:54.270 Vendor Log Page: 0x0 00:29:54.270 00:29:54.270 Number of Queues 00:29:54.270 ================ 00:29:54.270 Number of I/O Submission Queues: 128 00:29:54.270 Number of I/O Completion Queues: 128 00:29:54.270 00:29:54.270 ZNS Specific Controller Data 00:29:54.270 ============================ 00:29:54.270 Zone Append Size Limit: 0 00:29:54.270 00:29:54.270 00:29:54.270 Active Namespaces 00:29:54.270 ================= 00:29:54.270 get_feature(0x05) failed 00:29:54.270 Namespace ID:1 00:29:54.270 Command Set Identifier: NVM (00h) 00:29:54.270 Deallocate: Supported 00:29:54.270 Deallocated/Unwritten Error: Not Supported 00:29:54.270 Deallocated Read Value: Unknown 00:29:54.270 Deallocate in Write Zeroes: Not Supported 00:29:54.270 Deallocated Guard Field: 0xFFFF 00:29:54.271 Flush: Supported 00:29:54.271 Reservation: Not Supported 00:29:54.271 Namespace Sharing Capabilities: Multiple Controllers 00:29:54.271 Size (in LBAs): 3907029168 (1863GiB) 00:29:54.271 Capacity (in LBAs): 3907029168 (1863GiB) 00:29:54.271 Utilization (in LBAs): 3907029168 (1863GiB) 00:29:54.271 UUID: d532bfad-dc78-4630-807e-d2bf0b62141f 00:29:54.271 Thin Provisioning: Not Supported 00:29:54.271 Per-NS Atomic Units: Yes 00:29:54.271 Atomic Boundary Size (Normal): 0 00:29:54.271 Atomic Boundary Size (PFail): 0 00:29:54.271 Atomic Boundary Offset: 0 00:29:54.271 NGUID/EUI64 Never Reused: No 00:29:54.271 ANA group ID: 1 00:29:54.271 Namespace Write Protected: No 00:29:54.271 Number of LBA Formats: 1 00:29:54.271 Current LBA Format: LBA Format #00 00:29:54.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:54.271 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.271 rmmod nvme_tcp 00:29:54.271 rmmod nvme_fabrics 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:29:54.271 
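
The output above comes from spdk_nvme_identify pointed first at the well-known discovery NQN and then at the kernel-exported data subsystem; condensed to the two invocations (paths abbreviated, the run itself uses the absolute workspace path):

  # Discovery subsystem: source of the two discovery log entries shown above
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

  # Data subsystem exported by the kernel target; the get_feature(0x01/0x02/0x04) failures
  # above are expected here, since the Linux nvmet target does not implement those optional features
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The module teardown just above and the configfs cleanup that follows are the inverse of the kernel-target setup performed earlier in the test.
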
12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.271 12:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:56.814 12:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:01.023 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:01.023 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:30:01.023 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:02.406 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:30:02.406 00:30:02.406 real 0m22.297s 00:30:02.406 user 0m5.731s 00:30:02.406 sys 0m11.874s 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:02.406 ************************************ 00:30:02.406 END TEST nvmf_identify_kernel_target 00:30:02.406 ************************************ 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.406 ************************************ 00:30:02.406 START TEST nvmf_auth_host 00:30:02.406 ************************************ 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:02.406 * Looking for test storage... 00:30:02.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.406 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 
00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:02.407 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:10.550 12:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:10.550 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
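
The device probe running here works from fixed vendor/device-ID lists (e810, x722, mlx) and then resolves each matching PCI function to its kernel net device through sysfs. A rough, simplified equivalent of that lookup, with the ID list trimmed to the two E810 IDs relevant on this node (a hypothetical helper, not the harness's gather_supported_nvmf_pci_devs, which also checks link state and RDMA support):

  # Map supported NIC PCI functions to their kernel net device names
  supported=("0x8086:0x159b" "0x8086:0x1592")   # E810 device IDs from the lists above
  for pci in /sys/bus/pci/devices/*; do
      id="$(cat "$pci/vendor"):$(cat "$pci/device")"
      for want in "${supported[@]}"; do
          [[ $id == "$want" ]] || continue
          for net in "$pci"/net/*; do
              [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
          done
      done
  done

On this node that resolves the two E810 ports to cvl_0_0 and cvl_0_1, as the entries below show.
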
00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:10.550 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.550 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:10.551 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 
00:30:10.551 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.551 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:10.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:30:10.812 00:30:10.812 --- 10.0.0.2 ping statistics --- 00:30:10.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.812 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:10.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:30:10.812 00:30:10.812 --- 10.0.0.1 ping statistics --- 00:30:10.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.812 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:10.812 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=585580 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 585580 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 585580 ']' 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
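
The loopback topology used for the rest of the host suite is built in the nvmf_tcp_init sequence above: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, and both directions are verified with a single ping. Reduced to the essential commands from the log, with the same interface, namespace, and address names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> root namespace

Every nvmf_tgt instance in this test is then launched through ip netns exec cvl_0_0_ns_spdk, which is why the NVMF_APP command line below is prefixed with the namespace wrapper.
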
00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.072 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f334e5c7253291bbee39eecf30c0b43 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oAB 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f334e5c7253291bbee39eecf30c0b43 0 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f334e5c7253291bbee39eecf30c0b43 0 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f334e5c7253291bbee39eecf30c0b43 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oAB 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oAB 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.oAB 00:30:12.457 12:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6cf06d469dc00818d0ddcf891679a0e59e5318330efed2f673ebe8f58b36c008 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FGT 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6cf06d469dc00818d0ddcf891679a0e59e5318330efed2f673ebe8f58b36c008 3 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6cf06d469dc00818d0ddcf891679a0e59e5318330efed2f673ebe8f58b36c008 3 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6cf06d469dc00818d0ddcf891679a0e59e5318330efed2f673ebe8f58b36c008 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:30:12.457 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FGT 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FGT 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.FGT 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b3535319614afc9992ca47769f7b9664882925e961ae25a4 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oUE 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # 
format_dhchap_key b3535319614afc9992ca47769f7b9664882925e961ae25a4 0 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b3535319614afc9992ca47769f7b9664882925e961ae25a4 0 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b3535319614afc9992ca47769f7b9664882925e961ae25a4 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oUE 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oUE 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.oUE 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=708eea168ad74c55f11e0f7ea884129b1bc8fd7a8573d916 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wch 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 708eea168ad74c55f11e0f7ea884129b1bc8fd7a8573d916 2 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 708eea168ad74c55f11e0f7ea884129b1bc8fd7a8573d916 2 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=708eea168ad74c55f11e0f7ea884129b1bc8fd7a8573d916 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:30:12.458 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wch 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wch 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wch 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@723 -- # local digest len file key 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:12.719 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7b80b2d88b78abed2f6e341fb6745d03 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7sQ 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7b80b2d88b78abed2f6e341fb6745d03 1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7b80b2d88b78abed2f6e341fb6745d03 1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7b80b2d88b78abed2f6e341fb6745d03 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7sQ 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7sQ 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7sQ 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4ad37c8dc35ad911995e8abd851febf4 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lZw 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4ad37c8dc35ad911995e8abd851febf4 1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4ad37c8dc35ad911995e8abd851febf4 1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key 
digest 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4ad37c8dc35ad911995e8abd851febf4 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:30:12.720 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lZw 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lZw 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.lZw 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aaf7fadc90eef1b73c4ad5587ab4896f63d34d3b34859944 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Bmd 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aaf7fadc90eef1b73c4ad5587ab4896f63d34d3b34859944 2 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aaf7fadc90eef1b73c4ad5587ab4896f63d34d3b34859944 2 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aaf7fadc90eef1b73c4ad5587ab4896f63d34d3b34859944 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Bmd 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Bmd 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Bmd 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.720 12:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f6e7670fc9cbef4d26a69a4d48be8cad 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.s6X 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f6e7670fc9cbef4d26a69a4d48be8cad 0 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f6e7670fc9cbef4d26a69a4d48be8cad 0 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f6e7670fc9cbef4d26a69a4d48be8cad 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:30:12.720 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.s6X 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.s6X 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.s6X 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0b78f0d9a41440c599d3bc6f4224f7e3935afad52ffeaaf56dc7e584f1516431 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.e89 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0b78f0d9a41440c599d3bc6f4224f7e3935afad52ffeaaf56dc7e584f1516431 3 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0b78f0d9a41440c599d3bc6f4224f7e3935afad52ffeaaf56dc7e584f1516431 3 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=0b78f0d9a41440c599d3bc6f4224f7e3935afad52ffeaaf56dc7e584f1516431 00:30:12.981 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.e89 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.e89 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.e89 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 585580 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 585580 ']' 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.982 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oAB 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.FGT ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FGT 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oUE 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 
12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wch ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wch 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7sQ 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.lZw ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lZw 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Bmd 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.s6X ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.s6X 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.e89 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:13.555 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
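
The key material used throughout this run is produced by gen_dhchap_key in the trace above: xxd draws len/2 random bytes from /dev/urandom as a hex string, and an inline "python -" helper (its body is not echoed by xtrace) wraps that string into the DHHC-1:<digest>:<base64>: form that ends up in the /tmp/spdk.key-* files handed to keyring_file_add_key. A hypothetical stand-alone reconstruction of that step is sketched below; treating the ASCII hex string itself as the secret bytes matches the encoded values that appear later in this log, while the trailing 4-byte little-endian CRC-32 is an assumption about the unshown helper rather than something the trace states.

# Sketch only: approximates gen_dhchap_key/format_dhchap_key for digest index 0 ("null").
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters, i.e. len=32 in the trace
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'PY'
import sys, base64, zlib
secret = sys.argv[1].encode()                    # the ASCII hex string, not its decoded bytes
digest = int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, 'little')   # assumed integrity suffix and byte order
print('DHHC-1:%02d:%s:' % (digest, base64.b64encode(secret + crc).decode()))
PY

The result is then written to a mktemp file and chmod 0600, exactly as common.sh does above, before the path is registered with keyring_file_add_key.
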
00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:13.556 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:17.761 Waiting for block devices as requested 00:30:17.761 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:17.761 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:17.761 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:17.761 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:17.761 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:17.761 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:18.040 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:18.040 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:18.040 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:30:18.301 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:18.301 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:18.301 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:18.301 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:18.561 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:18.561 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:18.561 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:18.822 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:19.393 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:19.394 No valid GPT data, bailing 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:19.394 12:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:30:19.394 00:30:19.394 Discovery Log Number of Records 2, Generation counter 2 00:30:19.394 =====Discovery Log Entry 0====== 00:30:19.394 trtype: tcp 00:30:19.394 adrfam: ipv4 00:30:19.394 subtype: current discovery subsystem 00:30:19.394 treq: not specified, sq flow control disable supported 00:30:19.394 portid: 1 00:30:19.394 trsvcid: 4420 00:30:19.394 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:19.394 traddr: 10.0.0.1 00:30:19.394 eflags: none 00:30:19.394 sectype: none 00:30:19.394 =====Discovery Log Entry 1====== 00:30:19.394 trtype: tcp 00:30:19.394 adrfam: ipv4 00:30:19.394 subtype: nvme subsystem 00:30:19.394 treq: not specified, sq flow control disable supported 00:30:19.394 portid: 1 00:30:19.394 trsvcid: 4420 00:30:19.394 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:19.394 traddr: 10.0.0.1 00:30:19.394 eflags: none 00:30:19.394 sectype: none 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.394 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.655 nvme0n1 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
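
Condensed, each connect_authenticate pass drives the SPDK app (acting as the NVMe/TCP initiator here) with the same handful of RPCs that appear verbatim in the trace: register the DHHC-1 secrets with the keyring, restrict the allowed digests and DH groups, attach to the kernel nvmet subsystem with the chosen key pair, confirm the controller came up, and detach again. The sketch below restates a sha256/ffdhe2048/keyid=1 pass as direct scripts/rpc.py calls; rpc_cmd in the log is assumed to forward to rpc.py against the default /var/tmp/spdk.sock socket that waitforlisten checked earlier, and the rpc.py path is inferred from the workspace layout.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

$rpc keyring_file_add_key key1  /tmp/spdk.key-null.oUE       # registered once at startup (host/auth.sh@81)
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wch     # bidirectional controller secret (host/auth.sh@82)
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers                               # expect "nvme0" once DH-HMAC-CHAP succeeds
$rpc bdev_nvme_detach_controller nvme0

On the target side, nvmet_auth_set_key (host/auth.sh@42 onwards) writes the matching hash name, DH group and DHHC-1 secrets for the host entry created at host/auth.sh@36; the xtrace lines show the echoed values but not the destination configfs attributes, so both ends are assumed to agree on the credentials before each attach.
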
00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:19.655 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.655 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:19.655 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:19.655 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:19.655 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:19.655 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.655 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.916 nvme0n1 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.916 12:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.916 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.175 nvme0n1 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.175 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.176 nvme0n1 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.176 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.436 nvme0n1 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:20.436 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.696 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.696 nvme0n1 00:30:20.696 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.696 12:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.696 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.696 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.696 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.697 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.957 nvme0n1 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:20.957 
12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.957 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.218 nvme0n1 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.218 12:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.218 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.478 nvme0n1 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.478 12:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.478 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.738 nvme0n1 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:21.738 12:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.738 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.998 nvme0n1 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.998 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.258 nvme0n1 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.258 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:22.539 12:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.539 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.800 nvme0n1 00:30:22.800 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:30:22.800 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.800 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.800 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.800 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.800 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
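The trace above and below repeats the same four-step pattern for every digest/dhgroup/keyid combination: nvmet_auth_set_key provisions the key material on the target side, bdev_nvme_set_options pins the initiator to a single digest and DH group, bdev_nvme_attach_controller connects with the matching key pair, and bdev_nvme_get_controllers plus bdev_nvme_detach_controller confirm the controller came up and tear the session down again. A minimal standalone sketch of the host-side half follows, using SPDK's scripts/rpc.py in place of the harness's rpc_cmd wrapper; the RPC names and flags are taken from the trace, while the rpc.py path and the assumption that key objects named key2/ckey2 were registered earlier in the run are illustrative only.

# Hedged sketch of the host-side steps traced here (sha256 + ffdhe4096, keyid 2).
# Assumes an SPDK target is listening on 10.0.0.1:4420 and that key2/ckey2
# already exist in the application (their setup is not shown in this excerpt).
rpc=./scripts/rpc.py
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Restrict the initiator to one digest and one DH group (auth.sh@60).
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach with DH-HMAC-CHAP, supplying both the host key and the controller key (auth.sh@61).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller actually showed up, then detach before the next key is tried (auth.sh@64-65).
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0

The only things that change between iterations in the log are the digest/dhgroup pair passed to bdev_nvme_set_options and the key/ckey index passed to the attach call.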
00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.800 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.062 nvme0n1 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.062 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.324 nvme0n1 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.324 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:23.586 12:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.586 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.847 nvme0n1 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.847 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.848 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.418 nvme0n1 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:24.418 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 
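Each of these attach sequences also re-runs the get_main_ns_ip helper from nvmf/common.sh, whose xtrace is interleaved throughout: it maps the transport to the environment variable holding the address to dial (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), which is why every attach in this run targets 10.0.0.1. Below is a reconstruction of that selection logic; the variable names come from the trace, but the function body is reassembled for illustration rather than copied verbatim.

# Sketch of the address selection visible at nvmf/common.sh@741-755 in the trace.
# TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are assumed to be
# exported by the surrounding test environment.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    # Bail out if the transport is unset or has no candidate variable.
    [[ -n ${TEST_TRANSPORT:-} && -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion resolves the variable name to the actual address.
    [[ -n ${!ip:-} ]] || return 1
    echo "${!ip}"
}

# e.g. TEST_TRANSPORT=tcp with NVMF_INITIATOR_IP=10.0.0.1 prints 10.0.0.1, matching the echo lines in the trace.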
00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.419 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.990 nvme0n1 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.990 12:42:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:24.990 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:24.991 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:24.991 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.991 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.588 nvme0n1 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:25.588 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.589 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.907 nvme0n1 00:30:25.907 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.907 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.907 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.907 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.907 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.907 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.907 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.908 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.481 nvme0n1 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:26.481 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.482 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:27.422 nvme0n1 00:30:27.422 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.422 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.422 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.422 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.422 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.422 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.423 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.994 nvme0n1 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:27.994 
12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.994 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.935 nvme0n1 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.935 
12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.935 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.504 nvme0n1 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.504 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.505 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.442 nvme0n1 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.442 nvme0n1 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.442 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.704 12:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 nvme0n1 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:30.704 12:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.704 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.705 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.965 nvme0n1 00:30:30.965 12:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:30.965 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.966 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.225 nvme0n1 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.225 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.485 nvme0n1 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.485 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.746 nvme0n1 00:30:31.746 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.746 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.746 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.746 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.746 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.746 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.746 
12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:31.746 12:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.746 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.007 nvme0n1 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.007 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.267 nvme0n1 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.267 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.527 nvme0n1 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:32.527 
12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:32.527 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.528 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.788 nvme0n1 00:30:32.788 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.788 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.788 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.788 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.788 12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.788 
12:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.788 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.048 nvme0n1 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:33.048 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:33.048 12:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.049 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.308 nvme0n1 00:30:33.309 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.309 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.309 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.309 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.309 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.569 12:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.837 nvme0n1 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.837 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.838 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.098 nvme0n1 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.098 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:34.099 12:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.099 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.359 nvme0n1 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.359 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.619 12:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.880 nvme0n1 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.880 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.450 nvme0n1 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:35.450 12:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.450 12:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.450 12:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.020 nvme0n1 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:36.020 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.020 
12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.590 nvme0n1 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.590 12:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.850 nvme0n1 00:30:36.850 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.850 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.850 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.850 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:36.850 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.850 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:37.111 12:43:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:37.111 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:37.112 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.112 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.682 nvme0n1 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:37.682 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.683 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.942 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.943 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.513 nvme0n1 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.513 
12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.513 12:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.455 nvme0n1 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:39.455 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.456 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:39.456 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:39.456 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:39.456 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:39.456 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.456 12:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.396 nvme0n1 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.396 12:43:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:40.396 12:43:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.396 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.397 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.968 nvme0n1 00:30:40.968 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.968 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.968 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.968 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.968 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.968 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:41.229 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.230 nvme0n1 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:41.230 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:41.490 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.490 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:41.490 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.491 nvme0n1 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:41.491 
12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.491 12:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.752 nvme0n1 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.752 
12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:41.752 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:41.753 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.753 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.013 nvme0n1 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.013 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.014 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.274 nvme0n1 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.274 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.535 nvme0n1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.535 
12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:42.535 12:43:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.535 nvme0n1 00:30:42.535 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.796 12:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:42.796 12:43:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.796 nvme0n1 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.796 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.057 12:43:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.057 nvme0n1 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.057 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:43.317 
12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:43.317 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
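
The trace above walks the same verification cycle for the sha512 digest across key IDs 0-4, first with ffdhe2048 and then ffdhe3072, before moving on to ffdhe4096 below: host/auth.sh programs the key into the kernel nvmet target (nvmet_auth_set_key), restricts the SPDK initiator to that single digest/dhgroup pair (bdev_nvme_set_options), attaches over TCP at 10.0.0.1:4420 with the matching --dhchap-key (plus --dhchap-ctrlr-key whenever a controller key is defined for that key ID), checks that bdev_nvme_get_controllers reports nvme0, and detaches. A condensed sketch of that cycle follows, built only from the rpc_cmd invocations visible in this log; the loop bounds, variable names and the keys/ckeys arrays are reconstructions inferred from the trace, not the verbatim test script.

# Condensed per-key DH-HMAC-CHAP attach/verify/detach cycle (reconstructed sketch).
# Assumes the SPDK test environment provides rpc_cmd and nvmet_auth_set_key, and that
# keys[] / ckeys[] hold the DHHC-1 secrets shown in the trace.
digest=sha512
target_ip=10.0.0.1                     # NVMF_INITIATOR_IP in this run
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
  for keyid in "${!keys[@]}"; do
    # program the host key (and controller key, if any) into the nvmet target
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # limit the initiator to exactly this digest/dhgroup combination
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # bidirectional authentication only when a controller key exists for this key ID
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$target_ip" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # the attach is considered successful only if the controller is now listed
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done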
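Every secret in this trace uses the DHHC-1:<hash-id>:<base64>: representation; under the NVMe in-band authentication secret format (an assumed layout, not something the log itself states), the base64 payload is the raw secret followed by a 4-byte CRC-32 of that secret, so key lengths can be sanity-checked straight from the log. A minimal shell check against key0 as printed above:

# Assumed layout: base64(secret || crc32(secret)); key0 copied verbatim from the trace.
key='DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz:'
b64=${key#DHHC-1:*:}              # strip the "DHHC-1:<hash-id>:" prefix
b64=${b64%:}                      # strip the trailing colon
echo "$b64" | base64 -d | wc -c   # prints 36: a 32-byte secret plus the 4-byte CRC
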
00:30:43.318 nvme0n1 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.318 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:43.579 12:43:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.579 12:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.888 nvme0n1 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.888 12:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.888 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.889 12:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.889 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.149 nvme0n1 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.149 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.410 nvme0n1 00:30:44.410 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.410 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.410 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.410 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.410 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.410 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.671 12:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.931 nvme0n1 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.931 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.932 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.193 nvme0n1 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.193 12:43:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.193 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.763 nvme0n1 00:30:45.763 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.763 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.763 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.763 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.763 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.763 12:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:45.763 12:43:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.763 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.334 nvme0n1 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.334 12:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.905 nvme0n1 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.905 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.476 nvme0n1 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:47.476 12:43:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.476 12:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.737 nvme0n1 00:30:47.737 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.737 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.737 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.737 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.737 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.737 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:47.997 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYzMzRlNWM3MjUzMjkxYmJlZTM5ZWVjZjMwYzBiNDMrIfGz: 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: ]] 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNmMDZkNDY5ZGMwMDgxOGQwZGRjZjg5MTY3OWEwZTU5ZTUzMTgzMzBlZmVkMmY2NzNlYmU4ZjU4YjM2YzAwOOa4no8=: 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.998 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.581 nvme0n1 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.581 12:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.520 nvme0n1 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:49.520 12:43:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4MGIyZDg4Yjc4YWJlZDJmNmUzNDFmYjY3NDVkMDOW8cUD: 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: ]] 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGFkMzdjOGRjMzVhZDkxMTk5NWU4YWJkODUxZmViZjTH5Bym: 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:49.520 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.521 12:43:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.521 12:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.094 nvme0n1 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.094 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFmN2ZhZGM5MGVlZjFiNzNjNGFkNTU4N2FiNDg5NmY2M2QzNGQzYjM0ODU5OTQ018aOdw==: 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: ]] 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjZlNzY3MGZjOWNiZWY0ZDI2YTY5YTRkNDhiZThjYWQ8Zojk: 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:50.355 12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.355 
12:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.930 nvme0n1 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGI3OGYwZDlhNDE0NDBjNTk5ZDNiYzZmNDIyNGY3ZTM5MzVhZmFkNTJmZmVhYWY1NmRjN2U1ODRmMTUxNjQzMdxL6/8=: 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.930 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.931 12:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.872 nvme0n1 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM1MzUzMTk2MTRhZmM5OTkyY2E0Nzc2OWY3Yjk2NjQ4ODI5MjVlOTYxYWUyNWE0h7u/bg==: 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: ]] 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA4ZWVhMTY4YWQ3NGM1NWYxMWUwZjdlYTg4NDEyOWIxYmM4ZmQ3YTg1NzNkOTE2aVixUg==: 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:51.872 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.873 request: 00:30:51.873 { 00:30:51.873 "name": "nvme0", 00:30:51.873 "trtype": "tcp", 00:30:51.873 "traddr": "10.0.0.1", 00:30:51.873 "adrfam": "ipv4", 00:30:51.873 "trsvcid": "4420", 00:30:51.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:51.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:51.873 "prchk_reftag": false, 00:30:51.873 "prchk_guard": false, 00:30:51.873 "hdgst": false, 00:30:51.873 "ddgst": false, 00:30:51.873 "method": "bdev_nvme_attach_controller", 00:30:51.873 "req_id": 1 00:30:51.873 } 00:30:51.873 Got JSON-RPC error response 00:30:51.873 response: 00:30:51.873 { 00:30:51.873 "code": -5, 00:30:51.873 "message": "Input/output error" 00:30:51.873 } 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:51.873 12:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.873 request: 00:30:51.873 { 00:30:51.873 "name": "nvme0", 00:30:51.873 "trtype": "tcp", 00:30:51.873 "traddr": "10.0.0.1", 00:30:51.873 "adrfam": "ipv4", 00:30:51.873 "trsvcid": "4420", 00:30:51.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:51.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:51.873 "prchk_reftag": false, 00:30:51.873 "prchk_guard": false, 00:30:51.873 "hdgst": false, 00:30:51.873 "ddgst": false, 00:30:51.873 "dhchap_key": "key2", 00:30:51.873 "method": "bdev_nvme_attach_controller", 00:30:51.873 "req_id": 1 00:30:51.873 } 00:30:51.873 Got JSON-RPC error response 00:30:51.873 response: 00:30:51.873 { 00:30:51.873 "code": -5, 00:30:51.873 "message": "Input/output error" 00:30:51.873 } 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.873 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.134 request: 00:30:52.134 { 00:30:52.134 "name": "nvme0", 00:30:52.134 "trtype": "tcp", 00:30:52.134 "traddr": "10.0.0.1", 00:30:52.134 "adrfam": "ipv4", 00:30:52.134 "trsvcid": "4420", 00:30:52.134 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:52.134 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:52.134 "prchk_reftag": false, 00:30:52.134 "prchk_guard": false, 00:30:52.134 "hdgst": false, 00:30:52.134 "ddgst": false, 00:30:52.134 "dhchap_key": "key1", 00:30:52.134 "dhchap_ctrlr_key": "ckey2", 00:30:52.134 "method": "bdev_nvme_attach_controller", 00:30:52.134 "req_id": 1 00:30:52.134 } 00:30:52.134 Got JSON-RPC error response 00:30:52.134 response: 00:30:52.134 { 00:30:52.134 "code": -5, 00:30:52.134 "message": "Input/output error" 00:30:52.134 } 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:52.134 rmmod nvme_tcp 00:30:52.134 rmmod nvme_fabrics 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 585580 ']' 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 585580 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 585580 ']' 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 585580 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 585580 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 585580' 00:30:52.134 killing process with pid 585580 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 585580 00:30:52.134 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 585580 00:30:52.394 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:52.394 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:52.394 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:52.394 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.394 12:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:52.394 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.394 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.394 12:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:54.306 12:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:58.512 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:58.512 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:58.512 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:58.512 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:58.512 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:58.513 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:00.424 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:31:00.424 12:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oAB /tmp/spdk.key-null.oUE /tmp/spdk.key-sha256.7sQ /tmp/spdk.key-sha384.Bmd /tmp/spdk.key-sha512.e89 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:00.424 12:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:04.629 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:04.629 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:04.629 00:31:04.629 real 1m2.056s 00:31:04.629 user 0m53.572s 00:31:04.629 sys 0m17.000s 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.629 ************************************ 00:31:04.629 END TEST nvmf_auth_host 00:31:04.629 ************************************ 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.629 ************************************ 00:31:04.629 START TEST nvmf_digest 00:31:04.629 ************************************ 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:04.629 * Looking for test storage... 
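For orientation: the DH-HMAC-CHAP flow that nvmf_auth_host exercised above boils down to the short RPC sequence sketched below. This is an illustrative recap assembled from the commands visible in this trace, not additional captured output; it assumes the kernel nvmet target configured earlier in auth.sh is still listening on 10.0.0.1:4420 and that the key objects key1/ckey1 were registered beforehand (that setup is outside this excerpt). rpc_cmd is the autotest helper that forwards to scripts/rpc.py.

    # Illustrative recap: authenticate host0 against cnode0 with DHCHAP keyid 1 (sha512 / ffdhe8192)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd bdev_nvme_get_controllers            # lists nvme0 once the controller authenticates
    rpc_cmd bdev_nvme_detach_controller nvme0    # detach before the next keyid is exercised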
00:31:04.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.629 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:04.630 
12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:31:04.630 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.762 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:12.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:12.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.763 
12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:12.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:12.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.763 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.023 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.023 12:43:46 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.023 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.023 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.023 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.023 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.023 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:31:13.023 00:31:13.023 --- 10.0.0.2 ping statistics --- 00:31:13.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.023 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:31:13.023 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:31:13.284 00:31:13.284 --- 10.0.0.1 ping statistics --- 00:31:13.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.284 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.284 ************************************ 00:31:13.284 START TEST nvmf_digest_clean 00:31:13.284 ************************************ 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=601616 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 601616 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 601616 ']' 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:13.284 12:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:13.284 [2024-07-25 12:43:46.586837] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:13.284 [2024-07-25 12:43:46.586893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.284 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.284 [2024-07-25 12:43:46.678218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.544 [2024-07-25 12:43:46.768940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.544 [2024-07-25 12:43:46.768998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.544 [2024-07-25 12:43:46.769005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.544 [2024-07-25 12:43:46.769012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.544 [2024-07-25 12:43:46.769017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
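For reference, the interface plumbing that the nvmf/common.sh trace above performs (nvmf_tcp_init) boils down to the sequence below. This is a condensed sketch assuming the same cvl_0_0/cvl_0_1 device names and the cvl_0_0_ns_spdk namespace seen in the trace, not a verbatim excerpt of the script:

# put the target-side port in its own namespace, address both ends, open TCP/4420
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions before starting the target, as the ping output above shows
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1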
00:31:13.544 [2024-07-25 12:43:46.769042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.115 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:14.375 null0 00:31:14.375 [2024-07-25 12:43:47.583527] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.375 [2024-07-25 12:43:47.607828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.375 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.375 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:31:14.375 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=601664 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 601664 /var/tmp/bperf.sock 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 601664 ']' 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:14.376 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:14.376 [2024-07-25 12:43:47.674290] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:14.376 [2024-07-25 12:43:47.674361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601664 ] 00:31:14.376 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.376 [2024-07-25 12:43:47.759613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.636 [2024-07-25 12:43:47.868745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.207 12:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:15.207 12:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:31:15.207 12:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:15.207 12:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:15.207 12:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:15.467 12:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.467 12:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.727 nvme0n1 00:31:15.988 12:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:15.988 12:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:15.988 Running I/O for 2 seconds... 
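The per-test bdevperf ("bperf") flow traced above is, in outline, the following. Paths are shortened relative to the spdk tree in the workspace, and the flags match the randread / 4096-byte / queue-depth-128 case shown here:

# start bdevperf suspended (-z --wait-for-rpc) so digest options can be set before any I/O
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# attach the NVMe/TCP controller with data digest enabled; every data PDU now carries a crc32c
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# run the configured workload for the requested 2 seconds
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests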
00:31:17.906 00:31:17.906 Latency(us) 00:31:17.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.906 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:17.906 nvme0n1 : 2.01 15070.62 58.87 0.00 0.00 8482.28 4965.61 26012.75 00:31:17.906 =================================================================================================================== 00:31:17.906 Total : 15070.62 58.87 0.00 0.00 8482.28 4965.61 26012.75 00:31:17.906 0 00:31:17.906 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:17.906 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:17.906 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:17.906 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:17.906 | select(.opcode=="crc32c") 00:31:17.906 | "\(.module_name) \(.executed)"' 00:31:17.906 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 601664 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 601664 ']' 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 601664 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601664 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601664' 00:31:18.166 killing process with pid 601664 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 601664 00:31:18.166 Received shutdown signal, test time was about 2.000000 seconds 00:31:18.166 00:31:18.166 Latency(us) 00:31:18.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.166 =================================================================================================================== 00:31:18.166 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:18.166 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 601664 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=602362 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 602362 /var/tmp/bperf.sock 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 602362 ']' 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:18.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:18.426 12:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:18.426 [2024-07-25 12:43:51.754308] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:18.426 [2024-07-25 12:43:51.754387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602362 ] 00:31:18.426 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:18.426 Zero copy mechanism will not be used. 
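The get_accel_stats step traced above is what verifies that data digests were actually computed, and by the expected module. A minimal stand-alone version of that check (same jq filter as in the trace; the software module is expected because DSA is disabled in this run) would look like:

scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
  | while read -r acc_module acc_executed; do
      # at least one crc32c must have been executed, and it must have run in software
      (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest check passed ($acc_executed crc32c ops)"
    done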
00:31:18.426 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.426 [2024-07-25 12:43:51.834366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.686 [2024-07-25 12:43:51.912229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.256 12:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:19.256 12:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:31:19.256 12:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:19.256 12:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:19.256 12:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:19.515 12:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:19.515 12:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:20.086 nvme0n1 00:31:20.086 12:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:20.086 12:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:20.086 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:20.086 Zero copy mechanism will not be used. 00:31:20.086 Running I/O for 2 seconds... 
00:31:21.995 00:31:21.995 Latency(us) 00:31:21.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.995 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:21.995 nvme0n1 : 2.00 3785.79 473.22 0.00 0.00 4221.02 800.30 13107.20 00:31:21.995 =================================================================================================================== 00:31:21.995 Total : 3785.79 473.22 0.00 0.00 4221.02 800.30 13107.20 00:31:21.995 0 00:31:21.995 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:21.995 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:21.995 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:21.995 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:21.995 | select(.opcode=="crc32c") 00:31:21.995 | "\(.module_name) \(.executed)"' 00:31:21.995 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 602362 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 602362 ']' 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 602362 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 602362 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 602362' 00:31:22.277 killing process with pid 602362 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 602362 00:31:22.277 Received shutdown signal, test time was about 2.000000 seconds 00:31:22.277 00:31:22.277 Latency(us) 00:31:22.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.277 =================================================================================================================== 00:31:22.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:22.277 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 602362 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=603140 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 603140 /var/tmp/bperf.sock 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 603140 ']' 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:22.592 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:22.593 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:22.593 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:22.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:22.593 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:22.593 12:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:22.593 [2024-07-25 12:43:55.848106] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
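As a quick consistency check on how the bdevperf summary lines above are read (back-of-the-envelope only, not part of the test): for the 131072-byte, queue-depth-16 randread run,

473.22 MiB/s ≈ 3785.79 IOPS × 131072 B / 2^20   (each I/O is 1/8 MiB)
3785.79 IOPS ≈ 16 / 4221.02 µs                  (Little's law: queue depth / average latency)

so the IOPS, MiB/s and average-latency columns agree with one another.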
00:31:22.593 [2024-07-25 12:43:55.848178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603140 ] 00:31:22.593 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.593 [2024-07-25 12:43:55.924571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.593 [2024-07-25 12:43:56.001925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.554 12:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:23.554 12:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:31:23.554 12:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:23.554 12:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:23.554 12:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:23.554 12:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:23.554 12:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:24.124 nvme0n1 00:31:24.124 12:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:24.124 12:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:24.124 Running I/O for 2 seconds... 
00:31:26.035 00:31:26.035 Latency(us) 00:31:26.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.035 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:26.035 nvme0n1 : 2.01 23126.49 90.34 0.00 0.00 5523.49 3377.62 15728.64 00:31:26.035 =================================================================================================================== 00:31:26.035 Total : 23126.49 90.34 0.00 0.00 5523.49 3377.62 15728.64 00:31:26.035 0 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:26.296 | select(.opcode=="crc32c") 00:31:26.296 | "\(.module_name) \(.executed)"' 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 603140 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 603140 ']' 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 603140 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:26.296 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 603140 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 603140' 00:31:26.556 killing process with pid 603140 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 603140 00:31:26.556 Received shutdown signal, test time was about 2.000000 seconds 00:31:26.556 00:31:26.556 Latency(us) 00:31:26.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.556 =================================================================================================================== 00:31:26.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 603140 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=603806 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 603806 /var/tmp/bperf.sock 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 603806 ']' 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:26.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:26.556 12:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:26.556 [2024-07-25 12:43:59.936032] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:26.556 [2024-07-25 12:43:59.936084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603806 ] 00:31:26.556 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:26.556 Zero copy mechanism will not be used. 
00:31:26.556 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.817 [2024-07-25 12:44:00.013143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.817 [2024-07-25 12:44:00.089878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.386 12:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:27.386 12:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:31:27.386 12:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:27.386 12:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:27.386 12:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:27.957 12:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:27.957 12:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:28.217 nvme0n1 00:31:28.217 12:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:28.217 12:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:28.217 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:28.217 Zero copy mechanism will not be used. 00:31:28.217 Running I/O for 2 seconds... 
00:31:30.764 00:31:30.764 Latency(us) 00:31:30.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.764 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:30.764 nvme0n1 : 2.01 5047.86 630.98 0.00 0.00 3160.82 1354.83 7259.37 00:31:30.764 =================================================================================================================== 00:31:30.764 Total : 5047.86 630.98 0.00 0.00 3160.82 1354.83 7259.37 00:31:30.764 0 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:30.764 | select(.opcode=="crc32c") 00:31:30.764 | "\(.module_name) \(.executed)"' 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 603806 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 603806 ']' 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 603806 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 603806 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 603806' 00:31:30.764 killing process with pid 603806 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 603806 00:31:30.764 Received shutdown signal, test time was about 2.000000 seconds 00:31:30.764 00:31:30.764 Latency(us) 00:31:30.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.764 =================================================================================================================== 00:31:30.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:30.764 12:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 603806 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 601616 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 601616 ']' 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 601616 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601616 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601616' 00:31:30.764 killing process with pid 601616 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 601616 00:31:30.764 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 601616 00:31:31.024 00:31:31.024 real 0m17.685s 00:31:31.024 user 0m35.317s 00:31:31.024 sys 0m3.797s 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:31.024 ************************************ 00:31:31.024 END TEST nvmf_digest_clean 00:31:31.024 ************************************ 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.024 ************************************ 00:31:31.024 START TEST nvmf_digest_error 00:31:31.024 ************************************ 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=604462 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 604462 00:31:31.024 12:44:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 604462 ']' 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:31.024 12:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:31.024 [2024-07-25 12:44:04.349338] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:31.024 [2024-07-25 12:44:04.349387] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.024 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.024 [2024-07-25 12:44:04.438958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.284 [2024-07-25 12:44:04.502262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.284 [2024-07-25 12:44:04.502294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.284 [2024-07-25 12:44:04.502301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.284 [2024-07-25 12:44:04.502306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.284 [2024-07-25 12:44:04.502312] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:31.284 [2024-07-25 12:44:04.502328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:31.853 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.854 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:31.854 [2024-07-25 12:44:05.232383] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:31.854 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.854 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:31.854 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:31.854 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.854 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:32.113 null0 00:31:32.113 [2024-07-25 12:44:05.311167] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.113 [2024-07-25 12:44:05.335368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=604776 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 604776 /var/tmp/bperf.sock 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 604776 ']' 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
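The nvmf_digest_error flow starting here differs from the clean case in one respect: crc32c is routed through the accel error-injection module on the target (the accel_assign_opc call above, issued while the target is still paused by --wait-for-rpc), and this bdevperf instance is started without --wait-for-rpc. In outline, with the injection step that appears later in the trace below, and with the -i 256 argument copied verbatim from that trace:

scripts/rpc.py accel_assign_opc -o crc32c -m error                    # target-side RPC socket
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # keep digests clean while attaching
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # inject 'corrupt' errors into crc32c
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# the host then logs 'data digest error' and completes the reads with COMMAND TRANSIENT
# TRANSPORT ERROR, as in the nvme_tcp/nvme_qpair messages that follow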
00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:32.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:32.114 12:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:32.114 [2024-07-25 12:44:05.390962] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:32.114 [2024-07-25 12:44:05.391010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604776 ] 00:31:32.114 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.114 [2024-07-25 12:44:05.467573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.374 [2024-07-25 12:44:05.547195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.944 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:32.944 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:31:32.944 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:32.944 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:33.204 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:33.204 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.204 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:33.204 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.204 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:33.204 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:33.464 nvme0n1 00:31:33.464 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:33.464 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.464 12:44:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:33.464 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.464 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:33.464 12:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:33.464 Running I/O for 2 seconds... 00:31:33.464 [2024-07-25 12:44:06.841031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.464 [2024-07-25 12:44:06.841083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.464 [2024-07-25 12:44:06.841100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.464 [2024-07-25 12:44:06.853616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.464 [2024-07-25 12:44:06.853649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.464 [2024-07-25 12:44:06.853663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.464 [2024-07-25 12:44:06.870474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.465 [2024-07-25 12:44:06.870505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.465 [2024-07-25 12:44:06.870517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:06.889340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:06.889369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:06.889381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:06.902284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:06.902312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:06.902324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:06.920743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:06.920773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:06.920785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:06.938082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:06.938110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:06.938121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:06.951823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:06.951867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:06.951879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:06.970827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:06.970855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:06.970867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:06.989288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:06.989316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:06.989328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:07.006490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:07.006517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:07.006530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:07.020780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:07.020808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:07.020820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:07.040528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:07.040560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 
[2024-07-25 12:44:07.040573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:07.059898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:07.059926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.725 [2024-07-25 12:44:07.059938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.725 [2024-07-25 12:44:07.072959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.725 [2024-07-25 12:44:07.072986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.726 [2024-07-25 12:44:07.072998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.726 [2024-07-25 12:44:07.091235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.726 [2024-07-25 12:44:07.091263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.726 [2024-07-25 12:44:07.091274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.726 [2024-07-25 12:44:07.109024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.726 [2024-07-25 12:44:07.109053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.726 [2024-07-25 12:44:07.109064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.726 [2024-07-25 12:44:07.123472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.726 [2024-07-25 12:44:07.123500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.726 [2024-07-25 12:44:07.123519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.726 [2024-07-25 12:44:07.142180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.726 [2024-07-25 12:44:07.142208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.726 [2024-07-25 12:44:07.142220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.159649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.159677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17282 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.159689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.174022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.174049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.174060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.192398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.192425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.192437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.207247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.207274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.207285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.221430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.221458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.221471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.237448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.237476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.237488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.249740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.249767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.249779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.266892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.266920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:18616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.266932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.280410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.280436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.280448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.298175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.986 [2024-07-25 12:44:07.298203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.986 [2024-07-25 12:44:07.298214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.986 [2024-07-25 12:44:07.316206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.987 [2024-07-25 12:44:07.316235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.987 [2024-07-25 12:44:07.316246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.987 [2024-07-25 12:44:07.330322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.987 [2024-07-25 12:44:07.330350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.987 [2024-07-25 12:44:07.330362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.987 [2024-07-25 12:44:07.347566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.987 [2024-07-25 12:44:07.347594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.987 [2024-07-25 12:44:07.347606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.987 [2024-07-25 12:44:07.361535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.987 [2024-07-25 12:44:07.361570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.987 [2024-07-25 12:44:07.361583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.987 [2024-07-25 12:44:07.378354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.987 [2024-07-25 12:44:07.378382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.987 [2024-07-25 12:44:07.378394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:33.987 [2024-07-25 12:44:07.391425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:33.987 [2024-07-25 12:44:07.391452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.987 [2024-07-25 12:44:07.391469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.247 [2024-07-25 12:44:07.407098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.247 [2024-07-25 12:44:07.407127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.247 [2024-07-25 12:44:07.407139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.247 [2024-07-25 12:44:07.425343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.247 [2024-07-25 12:44:07.425371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.247 [2024-07-25 12:44:07.425383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.247 [2024-07-25 12:44:07.438227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.247 [2024-07-25 12:44:07.438252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.247 [2024-07-25 12:44:07.438264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.247 [2024-07-25 12:44:07.456940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.247 [2024-07-25 12:44:07.456967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.247 [2024-07-25 12:44:07.456979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.247 [2024-07-25 12:44:07.470741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.247 [2024-07-25 12:44:07.470768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.247 [2024-07-25 12:44:07.470779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.247 [2024-07-25 12:44:07.488672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 
00:31:34.247 [2024-07-25 12:44:07.488699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.488711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.505961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.505988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.506000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.521504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.521531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.521544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.540247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.540280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.540292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.557350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.557377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.557389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.571084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.571111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.571123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.590530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.590562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.590574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.603938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.603965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.603978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.622628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.622656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.622668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.639362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.639390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.639401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.248 [2024-07-25 12:44:07.655567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.248 [2024-07-25 12:44:07.655594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.248 [2024-07-25 12:44:07.655605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.670914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.670941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.670953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.689133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.689161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.689173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.705072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.705100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.705112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.720057] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.720083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.720095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.737478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.737505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.737517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.755916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.755943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.755954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.770726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.770753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.770765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.784581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.784609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.784620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.799922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.799951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.799963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.812308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.812335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.812351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:34.509 [2024-07-25 12:44:07.825539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.825570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.825582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.840352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.840379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.840391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.857945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.857972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.509 [2024-07-25 12:44:07.857984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.509 [2024-07-25 12:44:07.871661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.509 [2024-07-25 12:44:07.871689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.510 [2024-07-25 12:44:07.871701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.510 [2024-07-25 12:44:07.891314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.510 [2024-07-25 12:44:07.891342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.510 [2024-07-25 12:44:07.891354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.510 [2024-07-25 12:44:07.904219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.510 [2024-07-25 12:44:07.904247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.510 [2024-07-25 12:44:07.904259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.510 [2024-07-25 12:44:07.921991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.510 [2024-07-25 12:44:07.922019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.510 [2024-07-25 12:44:07.922031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:07.942163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:07.942190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:07.942202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:07.954679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:07.954711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:07.954723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:07.973242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:07.973270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:07.973283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:07.990650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:07.990677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:07.990689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:08.004410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:08.004437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:08.004449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:08.018955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:08.018982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:08.018994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:08.032739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:08.032766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:08.032778] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:08.048131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:08.048159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:08.048171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.770 [2024-07-25 12:44:08.060857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.770 [2024-07-25 12:44:08.060884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.770 [2024-07-25 12:44:08.060897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.075194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.075222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.075235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.089411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.089439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.089451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.107033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.107060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.107073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.120846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.120873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.120886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.134847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.134874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.134886] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.147046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.147073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.147085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.163064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.163092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.163104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.771 [2024-07-25 12:44:08.176915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:34.771 [2024-07-25 12:44:08.176941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.771 [2024-07-25 12:44:08.176953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.031 [2024-07-25 12:44:08.195698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.031 [2024-07-25 12:44:08.195728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.031 [2024-07-25 12:44:08.195741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.031 [2024-07-25 12:44:08.209254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.031 [2024-07-25 12:44:08.209283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.031 [2024-07-25 12:44:08.209302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.031 [2024-07-25 12:44:08.227638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.031 [2024-07-25 12:44:08.227665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.227677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.244351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.244378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:35.032 [2024-07-25 12:44:08.244389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.258477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.258505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.258517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.275850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.275878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.275890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.295057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.295085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.295097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.313077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.313105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.313117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.331530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.331563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.331575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.344829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.344856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.344868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.362472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.362500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11705 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.362511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.381710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.381736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.381749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.399894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.399922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.399934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.417420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.417447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.417459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.431907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.431935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.431949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.032 [2024-07-25 12:44:08.450083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.032 [2024-07-25 12:44:08.450111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.032 [2024-07-25 12:44:08.450123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.469073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.469101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.469113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.482508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.482535] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.482554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.500967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.500996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.501013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.514087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.514113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.514125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.532446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.532473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.532485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.551046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.551074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.551085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.564447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.564473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.564485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.582233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.582261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.582273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.600208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.600236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.600249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.614034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.614062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.293 [2024-07-25 12:44:08.614073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.293 [2024-07-25 12:44:08.632118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.293 [2024-07-25 12:44:08.632147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.294 [2024-07-25 12:44:08.632158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.294 [2024-07-25 12:44:08.652159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.294 [2024-07-25 12:44:08.652193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.294 [2024-07-25 12:44:08.652205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.294 [2024-07-25 12:44:08.668720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.294 [2024-07-25 12:44:08.668747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.294 [2024-07-25 12:44:08.668759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.294 [2024-07-25 12:44:08.683519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.294 [2024-07-25 12:44:08.683553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.294 [2024-07-25 12:44:08.683566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.294 [2024-07-25 12:44:08.701190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.294 [2024-07-25 12:44:08.701218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.294 [2024-07-25 12:44:08.701230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.554 [2024-07-25 12:44:08.714697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xde07d0) 00:31:35.554 [2024-07-25 12:44:08.714725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.554 [2024-07-25 12:44:08.714737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.554 [2024-07-25 12:44:08.734837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.554 [2024-07-25 12:44:08.734865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.554 [2024-07-25 12:44:08.734877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.554 [2024-07-25 12:44:08.748155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.555 [2024-07-25 12:44:08.748182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.555 [2024-07-25 12:44:08.748194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.555 [2024-07-25 12:44:08.765795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.555 [2024-07-25 12:44:08.765821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.555 [2024-07-25 12:44:08.765833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.555 [2024-07-25 12:44:08.781336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.555 [2024-07-25 12:44:08.781364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.555 [2024-07-25 12:44:08.781376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.555 [2024-07-25 12:44:08.795165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.555 [2024-07-25 12:44:08.795195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.555 [2024-07-25 12:44:08.795209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.555 [2024-07-25 12:44:08.815586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde07d0) 00:31:35.555 [2024-07-25 12:44:08.815615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.555 [2024-07-25 12:44:08.815627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.555 00:31:35.555 Latency(us) 00:31:35.555 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:31:35.555 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:35.555 nvme0n1 : 2.01 15705.87 61.35 0.00 0.00 8137.91 4864.79 24903.68 00:31:35.555 =================================================================================================================== 00:31:35.555 Total : 15705.87 61.35 0.00 0.00 8137.91 4864.79 24903.68 00:31:35.555 0 00:31:35.555 12:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:35.555 12:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:35.555 12:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:35.555 | .driver_specific 00:31:35.555 | .nvme_error 00:31:35.555 | .status_code 00:31:35.555 | .command_transient_transport_error' 00:31:35.555 12:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 604776 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 604776 ']' 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 604776 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 604776 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 604776' 00:31:35.815 killing process with pid 604776 00:31:35.815 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 604776 00:31:35.815 Received shutdown signal, test time was about 2.000000 seconds 00:31:35.815 00:31:35.815 Latency(us) 00:31:35.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.815 =================================================================================================================== 00:31:35.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:35.816 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 604776 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 
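The (( 123 > 0 )) check traced just above is the pass condition for the run whose output fills the preceding lines: digest.sh pulls bdevperf's per-status-code NVMe error counters over RPC and requires at least one COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. Below is a minimal standalone sketch of that step, assuming the workspace paths and the exact jq filter seen in the trace; the helper is an approximation of get_transient_errcount, not the digest.sh source.

#!/usr/bin/env bash
# Sketch only: read the transient transport error count from a running bdevperf
# instance, the same way the trace above does. Paths are this CI job's; adjust locally.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# With bdev_nvme_set_options --nvme-error-stat enabled, bdev_get_iostat reports
# per-status-code error counters; digest errors surface as status 00/22 and are
# accumulated under command_transient_transport_error.
count=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The test only requires that at least one request failed this way
# (123 did in the run above).
(( count > 0 )) && echo "OK: $count transient transport errors" || exit 1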
00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=605402 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 605402 /var/tmp/bperf.sock 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 605402 ']' 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:36.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.075 12:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:36.075 [2024-07-25 12:44:09.300693] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:36.075 [2024-07-25 12:44:09.300746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605402 ] 00:31:36.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:36.075 Zero copy mechanism will not be used. 
00:31:36.075 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.075 [2024-07-25 12:44:09.376415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.075 [2024-07-25 12:44:09.453456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:37.015 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:37.276 nvme0n1 00:31:37.276 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:37.276 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.276 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:37.276 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.276 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:37.276 12:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:37.537 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:37.537 Zero copy mechanism will not be used. 00:31:37.537 Running I/O for 2 seconds... 
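For readability, the error-path sequence being traced above reduces to a handful of RPC calls. The sketch below is assembled only from the commands visible in this trace and is not the test script itself; it assumes rpc.py's default socket reaches the nvmf target application, that /var/tmp/bperf.sock is the bdevperf RPC socket (as in the trace), and that $rootdir points at the SPDK checkout used by this job.

# Minimal sketch of the digest-error run traced above (assumptions noted in the lead-in).
rpc="$rootdir/scripts/rpc.py"
bperf_sock=/var/tmp/bperf.sock

# Host side (bdevperf): keep per-command NVMe error statistics and retry indefinitely
# in the bdev layer, so injected digest errors are counted rather than failing the job.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: clear any stale crc32c error injection, then attach the controller from
# bdevperf with TCP data digest enabled (--ddgst).
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 32 crc32c operations on the target so the host observes data digest errors.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the queued bdevperf job (randread, 128K I/O, queue depth 16 in this run).
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests

# Count completions reported as COMMAND TRANSIENT TRANSPORT ERROR and require at least
# one, mirroring the get_transient_errcount / (( ... > 0 )) check earlier in the log.
errcount=$("$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))

Judging from the trace, each injected digest error surfaces on the host as a COMMAND TRANSIENT TRANSPORT ERROR completion that is retried (hence the non-zero IOPS in the latency table) while the transient-error counter increments, which is what the final check relies on.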
00:31:37.537 [2024-07-25 12:44:10.800557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.800606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.800622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.810331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.810364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.810377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.819869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.819900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.819914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.829849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.829877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.829890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.839676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.839705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.839718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.850144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.850172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.850185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.859993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.860021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.860040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.868680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.868708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.868720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.877994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.878023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.878035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.886987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.887015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.887027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.895534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.895569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.895581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.905954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.905984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.905997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.915796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.915824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.915836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.925043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.925071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.925083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.934509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.934537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.934556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.945210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.945239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.945252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.537 [2024-07-25 12:44:10.954723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.537 [2024-07-25 12:44:10.954751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.537 [2024-07-25 12:44:10.954764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.798 [2024-07-25 12:44:10.965318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.798 [2024-07-25 12:44:10.965347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.798 [2024-07-25 12:44:10.965359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.798 [2024-07-25 12:44:10.973616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.798 [2024-07-25 12:44:10.973645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.798 [2024-07-25 12:44:10.973657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.798 [2024-07-25 12:44:10.983249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.798 [2024-07-25 12:44:10.983276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.798 [2024-07-25 12:44:10.983288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:10.992972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:10.993001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:37.799 [2024-07-25 12:44:10.993013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.003447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.003475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.003488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.012175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.012204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.012217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.022786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.022814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.022831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.032504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.032532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.032544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.043568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.043596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.043608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.052613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.052642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.052654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.061255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.061282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.061295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.070940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.070967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.070979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.082469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.082496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.082508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.090544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.090579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.090591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.100302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.100329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.100341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.110370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.110403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.110415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.119992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.120021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.120034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.130473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.130502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.130514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.140779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.140807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.140819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.149999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.150028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.150040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.158452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.158482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.158494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.168363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.168392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.168404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.174305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.174333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.174346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.184274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.184303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.184315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.193500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 
00:31:37.799 [2024-07-25 12:44:11.193529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.193541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.203239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.203268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.203280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.799 [2024-07-25 12:44:11.213742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:37.799 [2024-07-25 12:44:11.213770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.799 [2024-07-25 12:44:11.213782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.224137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.224167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.224180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.233917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.233946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.233958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.244328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.244356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.244369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.255282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.255310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.255323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.265386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.265416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.265428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.274907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.274936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.274953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.284659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.284687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.284700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.294724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.294752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.294764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.060 [2024-07-25 12:44:11.306136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.060 [2024-07-25 12:44:11.306165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.060 [2024-07-25 12:44:11.306177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.315992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.316021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.316034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.325766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.325795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.325807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.336446] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.336475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.336488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.348194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.348223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.348235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.360308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.360337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.360349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.372833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.372861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.372874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.385269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.385298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.385311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.397361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.397391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.397403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.410364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.410392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.410405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:31:38.061 [2024-07-25 12:44:11.423025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.423053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.423065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.435287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.435315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.435327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.448161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.448189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.448201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.460143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.460172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.460184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.061 [2024-07-25 12:44:11.472677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.061 [2024-07-25 12:44:11.472706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.061 [2024-07-25 12:44:11.472723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.485107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.485136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.485149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.498067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.498096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.498108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.508188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.508217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.508229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.518206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.518235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.518247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.527636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.527675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.527687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.537665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.537693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.537705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.546923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.546951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.546964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.556899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.556928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.556941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.565845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.565878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.565891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.576156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.576185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.576197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.586211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.586240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.586253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.594981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.595010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.595022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.604794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.604823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.604835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.614730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.614759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.322 [2024-07-25 12:44:11.614771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.322 [2024-07-25 12:44:11.623899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.322 [2024-07-25 12:44:11.623928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.623940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.633969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.633999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 
[2024-07-25 12:44:11.634011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.644340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.644368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.644381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.654705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.654733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.654746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.665345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.665374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.665386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.675068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.675097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.675110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.684231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.684260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.684272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.693385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.693414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.693426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.702294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.702322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.702335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.711733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.711763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.711775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.721405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.721435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.721447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.323 [2024-07-25 12:44:11.731462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.323 [2024-07-25 12:44:11.731491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.323 [2024-07-25 12:44:11.731509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.741400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.741430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.741442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.751632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.751661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.751674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.762069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.762097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.762109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.773226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.773255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.773267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.782888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.782917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.782929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.792961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.792990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.793002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.803374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.803403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.803416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.813298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.813327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.813340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.821782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.821816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.821828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.831842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.831870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.831882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.841278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.841307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.841319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.851368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.851396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.851409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.861054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.861082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.861094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.870756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.870786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.870798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.880304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.880332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.880345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.890393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.890421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.890434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.900924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.900952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.900964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.910212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 
[2024-07-25 12:44:11.910242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.910255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.919958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.919988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.920000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.928228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.928257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.928270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.938125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.938154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.938166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.946904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.946934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.946947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.956756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.956785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.956798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.584 [2024-07-25 12:44:11.966983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.584 [2024-07-25 12:44:11.967012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.584 [2024-07-25 12:44:11.967024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.585 [2024-07-25 12:44:11.976693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x209b330) 00:31:38.585 [2024-07-25 12:44:11.976723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.585 [2024-07-25 12:44:11.976735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.585 [2024-07-25 12:44:11.986399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.585 [2024-07-25 12:44:11.986428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.585 [2024-07-25 12:44:11.986445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.585 [2024-07-25 12:44:11.994848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.585 [2024-07-25 12:44:11.994877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.585 [2024-07-25 12:44:11.994889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.585 [2024-07-25 12:44:12.002256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.585 [2024-07-25 12:44:12.002284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.585 [2024-07-25 12:44:12.002296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.845 [2024-07-25 12:44:12.011324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.845 [2024-07-25 12:44:12.011353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.845 [2024-07-25 12:44:12.011366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.845 [2024-07-25 12:44:12.022256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.845 [2024-07-25 12:44:12.022284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.845 [2024-07-25 12:44:12.022297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.845 [2024-07-25 12:44:12.032355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.845 [2024-07-25 12:44:12.032384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.845 [2024-07-25 12:44:12.032397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.845 [2024-07-25 12:44:12.042010] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.042039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.042051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.052356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.052384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.052396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.061464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.061493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.061506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.071342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.071371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.071384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.081264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.081292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.081304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.090249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.090277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.090289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.097224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.097253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.097265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:31:38.846 [2024-07-25 12:44:12.107897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.107925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.107937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.120534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.120570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.120582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.132133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.132161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.132173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.143411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.143438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.143450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.152089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.152118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.152135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.160920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.160950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.160963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.169726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.169755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.169767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.180031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.180059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.180072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.189382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.189409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.189422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.198113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.198143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.198155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.208636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.208666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.208679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.218635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.218665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.218677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.228553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.228582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.228595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.239630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.239663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.239675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.248670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.248698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.248711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:38.846 [2024-07-25 12:44:12.257439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:38.846 [2024-07-25 12:44:12.257467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:38.846 [2024-07-25 12:44:12.257480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.267313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.267342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.267354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.277537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.277573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.277585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.287210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.287239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.287251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.297151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.297180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.297193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.302797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.302826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:39.108 [2024-07-25 12:44:12.302838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.310846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.310875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.310888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.319642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.319671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.319684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.328479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.328509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.328521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.338171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.338200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.338212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.347682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.347712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.347724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.356259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.356288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.356301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.365365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.365395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.365407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.374490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.374520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.374532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.383600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.383629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.383642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.393800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.393829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.393847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.404081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.404111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.404123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.413777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.413806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.413818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.423630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.423658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.423671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.434246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.434275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.434288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.443436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.443466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.443478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.452929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.452959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.452971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.462526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.462564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.462576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.471780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.471809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.471822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.481496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.108 [2024-07-25 12:44:12.481529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.108 [2024-07-25 12:44:12.481541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.108 [2024-07-25 12:44:12.490909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.109 [2024-07-25 12:44:12.490939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.109 [2024-07-25 12:44:12.490951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.109 [2024-07-25 12:44:12.500950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.109 
[2024-07-25 12:44:12.500979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.109 [2024-07-25 12:44:12.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.109 [2024-07-25 12:44:12.511093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.109 [2024-07-25 12:44:12.511123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.109 [2024-07-25 12:44:12.511135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.109 [2024-07-25 12:44:12.519657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.109 [2024-07-25 12:44:12.519686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.109 [2024-07-25 12:44:12.519698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.369 [2024-07-25 12:44:12.528318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.369 [2024-07-25 12:44:12.528347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.369 [2024-07-25 12:44:12.528359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.369 [2024-07-25 12:44:12.537105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.369 [2024-07-25 12:44:12.537134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.369 [2024-07-25 12:44:12.537146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.369 [2024-07-25 12:44:12.546483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.369 [2024-07-25 12:44:12.546512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.369 [2024-07-25 12:44:12.546524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.555668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.555697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.555714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.565793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.565823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.565836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.574347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.574376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.574388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.582207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.582237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.582249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.591607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.591636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.591648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.601629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.601657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.601670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.612068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.612097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.612109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.621882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.621913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.621926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.632285] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.632314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.632326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.638133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.638166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.638178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.644422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.644450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.644462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.653734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.653764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.653776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.663375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.663404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.663417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.671485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.671514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.671526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.679987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.680017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.680029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:31:39.370 [2024-07-25 12:44:12.690676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.690706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.690718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.703418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.703448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.703460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.715415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.715445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.715457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.721757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.721785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.721797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.726834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.726864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.726875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.736650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.736679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.736691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.745558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.745587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.745600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.755131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.755160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.755173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.763718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.763747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.763759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.773499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.370 [2024-07-25 12:44:12.773528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.370 [2024-07-25 12:44:12.773541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:39.370 [2024-07-25 12:44:12.783581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.371 [2024-07-25 12:44:12.783610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.371 [2024-07-25 12:44:12.783622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:39.631 [2024-07-25 12:44:12.794093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209b330) 00:31:39.631 [2024-07-25 12:44:12.794122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.631 [2024-07-25 12:44:12.794138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:39.631 00:31:39.631 Latency(us) 00:31:39.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.631 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:39.631 nvme0n1 : 2.01 3175.16 396.89 0.00 0.00 5031.78 825.50 13208.02 00:31:39.631 =================================================================================================================== 00:31:39.631 Total : 3175.16 396.89 0.00 0.00 5031.78 825.50 13208.02 00:31:39.631 0 00:31:39.631 12:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:39.631 12:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:39.631 12:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:39.631 | .driver_specific 00:31:39.631 | .nvme_error 
00:31:39.631 | .status_code 00:31:39.631 | .command_transient_transport_error' 00:31:39.631 12:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:39.631 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 )) 00:31:39.631 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 605402 00:31:39.631 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 605402 ']' 00:31:39.631 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 605402 00:31:39.631 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:31:39.631 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:39.631 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 605402 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 605402' 00:31:39.892 killing process with pid 605402 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 605402 00:31:39.892 Received shutdown signal, test time was about 2.000000 seconds 00:31:39.892 00:31:39.892 Latency(us) 00:31:39.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.892 =================================================================================================================== 00:31:39.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 605402 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=606020 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 606020 /var/tmp/bperf.sock 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 606020 ']' 00:31:39.892 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:39.893 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
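The get_transient_errcount step traced above is how each error run is judged: bdevperf keeps per-bdev NVMe error counters (the harness enables them with bdev_nvme_set_options --nvme-error-stat and sets --bdev-retry-count -1 so failed commands are retried instead of failing the job), and the run passes only if the COMMAND TRANSIENT TRANSPORT ERROR count read back over the bperf.sock RPC socket is non-zero; here it is 205. A minimal sketch of that readback, assembled from the rpc.py and jq invocations in the trace (the variable names are illustrative and not part of the harness):

    # Read the transient transport error counter for nvme0n1 from bdevperf.
    # Paths, RPC arguments and the jq filter are taken from the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    count=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error')

    # The test asserts the counter is non-zero, i.e. the crc32c corruption
    # armed with accel_error_inject_error (while the controller is attached
    # with --ddgst, both traced in this log) really surfaced as transient
    # transport errors.
    (( count > 0 ))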
rpc_addr=/var/tmp/bperf.sock 00:31:39.893 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.893 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:39.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:39.893 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.893 12:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:39.893 [2024-07-25 12:44:13.271480] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:39.893 [2024-07-25 12:44:13.271534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606020 ] 00:31:39.893 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.152 [2024-07-25 12:44:13.348770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.152 [2024-07-25 12:44:13.425946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.721 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:40.721 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:31:40.721 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:40.721 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:40.982 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:40.982 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.982 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:40.982 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.982 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:40.982 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.242 nvme0n1 00:31:41.242 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:41.242 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.242 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:41.242 12:44:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.242 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:41.242 12:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:41.502 Running I/O for 2 seconds... 00:31:41.502 [2024-07-25 12:44:14.707169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f4f40 00:31:41.502 [2024-07-25 12:44:14.708360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.708405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.721523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:41.502 [2024-07-25 12:44:14.722799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.722828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.732402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea680 00:31:41.502 [2024-07-25 12:44:14.733651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.733678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.743647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:41.502 [2024-07-25 12:44:14.745029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.745055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.752620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ee5c8 00:31:41.502 [2024-07-25 12:44:14.753433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.753459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.763532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ec408 00:31:41.502 [2024-07-25 12:44:14.764330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.764356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
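Condensed, the setup that host/digest.sh drives for this randwrite 4096/128 pass amounts to the sequence sketched below (a sketch only: paths are shortened, and the accel_error_inject_error calls go through the rpc_cmd helper rather than the bperf socket, so in this sketch they fall back to the default RPC socket):

  # launch the initiator-side workload generator; -z makes it wait for the perform_tests RPC
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # turn on per-controller NVMe error counters and unlimited bdev-layer retries
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # make sure crc32c error injection is off while the controller attaches
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach over TCP with data digest enabled (--ddgst), producing bdev nvme0n1
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # re-enable injection in corrupt mode (-i 256 carried over verbatim from the trace), then start the 2 s run
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests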
00:31:41.502 [2024-07-25 12:44:14.775031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eff18 00:31:41.502 [2024-07-25 12:44:14.775879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.775907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.785915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f35f0 00:31:41.502 [2024-07-25 12:44:14.787228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.787254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.797089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e12d8 00:31:41.502 [2024-07-25 12:44:14.798176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.798208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.808576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f2d80 00:31:41.502 [2024-07-25 12:44:14.809820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.809847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.819453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6738 00:31:41.502 [2024-07-25 12:44:14.821008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.821035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.830778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f6020 00:31:41.502 [2024-07-25 12:44:14.832146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.832172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.839770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:41.502 [2024-07-25 12:44:14.840646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.840672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:003e p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.850630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ed4e8 00:31:41.502 [2024-07-25 12:44:14.851375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.851401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.861448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eaab8 00:31:41.502 [2024-07-25 12:44:14.862218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.862244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.872903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ee190 00:31:41.502 [2024-07-25 12:44:14.874109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.874135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.884041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6738 00:31:41.502 [2024-07-25 12:44:14.885201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.885227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.894899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ee190 00:31:41.502 [2024-07-25 12:44:14.896064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.502 [2024-07-25 12:44:14.896090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:41.502 [2024-07-25 12:44:14.906041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eea00 00:31:41.503 [2024-07-25 12:44:14.907301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.503 [2024-07-25 12:44:14.907328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:41.503 [2024-07-25 12:44:14.917051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e4578 00:31:41.503 [2024-07-25 12:44:14.918209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.503 [2024-07-25 12:44:14.918235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:14.928187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e4de8 00:31:41.762 [2024-07-25 12:44:14.929626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:14.929652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:14.937168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e4de8 00:31:41.762 [2024-07-25 12:44:14.938051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:14.938077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:14.948165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f3a28 00:31:41.762 [2024-07-25 12:44:14.949364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:14.949390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:14.959463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f7100 00:31:41.762 [2024-07-25 12:44:14.960536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:14.960569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:14.970230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e88f8 00:31:41.762 [2024-07-25 12:44:14.971238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:14.971264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:14.981569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6300 00:31:41.762 [2024-07-25 12:44:14.982724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:14.982751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:14.992465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eea00 00:31:41.762 [2024-07-25 12:44:14.993613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:14.993639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.003462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e84c0 00:31:41.762 [2024-07-25 12:44:15.005065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.005092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.012968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e7818 00:31:41.762 [2024-07-25 12:44:15.013800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.013828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.023837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6b70 00:31:41.762 [2024-07-25 12:44:15.024551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.024578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.035117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ec840 00:31:41.762 [2024-07-25 12:44:15.036070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.036096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.045939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e5a90 00:31:41.762 [2024-07-25 12:44:15.046959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.046985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.056748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190edd58 00:31:41.762 [2024-07-25 12:44:15.057556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.057583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.067558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f4f40 00:31:41.762 [2024-07-25 12:44:15.068419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.068445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.078699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190edd58 00:31:41.762 [2024-07-25 12:44:15.079758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.079789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.089705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e8088 00:31:41.762 [2024-07-25 12:44:15.090718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.090745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.100993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e8d30 00:31:41.762 [2024-07-25 12:44:15.102229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.102255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.111791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e2c28 00:31:41.762 [2024-07-25 12:44:15.113142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.113168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.123120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eaab8 00:31:41.762 [2024-07-25 12:44:15.124480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.124506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.132082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f46d0 00:31:41.762 [2024-07-25 12:44:15.132902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.132928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.143079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f2948 00:31:41.762 [2024-07-25 12:44:15.144171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 
12:44:15.144197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.154569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e2c28 00:31:41.762 [2024-07-25 12:44:15.155536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.762 [2024-07-25 12:44:15.155568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:41.762 [2024-07-25 12:44:15.165198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e5a90 00:31:41.763 [2024-07-25 12:44:15.166136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.763 [2024-07-25 12:44:15.166161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:41.763 [2024-07-25 12:44:15.176056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:41.763 [2024-07-25 12:44:15.177004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.763 [2024-07-25 12:44:15.177030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.187540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eaab8 00:31:42.022 [2024-07-25 12:44:15.188676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.188702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.198211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ebb98 00:31:42.022 [2024-07-25 12:44:15.199285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.199311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.209195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ee190 00:31:42.022 [2024-07-25 12:44:15.210580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.210606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.220578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6fa8 00:31:42.022 [2024-07-25 12:44:15.221793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
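Every injected corruption appears above as a tcp.c data-digest (CRC32C) mismatch followed by an nvme_qpair.c pair that prints the affected WRITE (sqid/cid/nsid/lba) and its completion; in the completion, "(00/22)" is status code type 0x0 with status code 0x22, the COMMAND TRANSIENT TRANSPORT ERROR spelled out in the text, and dnr:0 means the Do Not Retry bit is clear. A rough way to tally these entries from a saved copy of this console output (the file name here is hypothetical) is:

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf-tcp-phy-autotest.console.log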
00:31:42.022 [2024-07-25 12:44:15.221820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.231249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e5658 00:31:42.022 [2024-07-25 12:44:15.232490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.232516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.242783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e4578 00:31:42.022 [2024-07-25 12:44:15.244194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.244220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.253462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ed4e8 00:31:42.022 [2024-07-25 12:44:15.254826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.254851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.264320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e2c28 00:31:42.022 [2024-07-25 12:44:15.265672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.265698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.275154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f0788 00:31:42.022 [2024-07-25 12:44:15.276505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.276531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.284588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ed4e8 00:31:42.022 [2024-07-25 12:44:15.285965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.285991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.295925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eee38 00:31:42.022 [2024-07-25 12:44:15.296945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7178 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.296971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.306589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1ca0 00:31:42.022 [2024-07-25 12:44:15.307575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.307601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.317422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f7da8 00:31:42.022 [2024-07-25 12:44:15.318416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.318443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.328875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ebb98 00:31:42.022 [2024-07-25 12:44:15.330003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.330029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.339674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1430 00:31:42.022 [2024-07-25 12:44:15.340809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.340835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.350329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ec840 00:31:42.022 [2024-07-25 12:44:15.351450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.351477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.361913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fb8b8 00:31:42.022 [2024-07-25 12:44:15.363061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.363091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.372796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f5378 00:31:42.022 [2024-07-25 12:44:15.374360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4583 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.374386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.383981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f2510 00:31:42.022 [2024-07-25 12:44:15.385379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.385405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.393154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e84c0 00:31:42.022 [2024-07-25 12:44:15.394066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.394091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.403993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e88f8 00:31:42.022 [2024-07-25 12:44:15.404926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.404953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.415289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e38d0 00:31:42.022 [2024-07-25 12:44:15.416340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.416366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.426164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea680 00:31:42.022 [2024-07-25 12:44:15.427109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.427136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:42.022 [2024-07-25 12:44:15.437464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eb328 00:31:42.022 [2024-07-25 12:44:15.438652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.022 [2024-07-25 12:44:15.438677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.448321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e4140 00:31:42.281 [2024-07-25 12:44:15.449551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 
nsid:1 lba:12731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.449577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.459450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e38d0 00:31:42.281 [2024-07-25 12:44:15.460784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.460810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.471027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f4f40 00:31:42.281 [2024-07-25 12:44:15.472497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.472523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.480013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eee38 00:31:42.281 [2024-07-25 12:44:15.480898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.480924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.490745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e8d30 00:31:42.281 [2024-07-25 12:44:15.491655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.491682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.501614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e12d8 00:31:42.281 [2024-07-25 12:44:15.502523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.502553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.513082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eee38 00:31:42.281 [2024-07-25 12:44:15.514145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.514171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.523774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e8d30 00:31:42.281 [2024-07-25 12:44:15.524826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:17713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.524852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.534611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f6890 00:31:42.281 [2024-07-25 12:44:15.535650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.535675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.545447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ec408 00:31:42.281 [2024-07-25 12:44:15.546490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.546518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.556462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e12d8 00:31:42.281 [2024-07-25 12:44:15.557831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.557857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.567782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f6890 00:31:42.281 [2024-07-25 12:44:15.568943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.568969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.578431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea680 00:31:42.281 [2024-07-25 12:44:15.579613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.579640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.589451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eb328 00:31:42.281 [2024-07-25 12:44:15.591010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.591036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.600651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eea00 00:31:42.281 [2024-07-25 12:44:15.601975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.602001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.611469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f7da8 00:31:42.281 [2024-07-25 12:44:15.612816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.612841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.622286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f0bc0 00:31:42.281 [2024-07-25 12:44:15.623619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.623646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.633081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e9e10 00:31:42.281 [2024-07-25 12:44:15.634409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.634435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.642069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f0350 00:31:42.281 [2024-07-25 12:44:15.642889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.642919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.653048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6fa8 00:31:42.281 [2024-07-25 12:44:15.653768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.653794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.664223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e3498 00:31:42.281 [2024-07-25 12:44:15.665184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.665210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.675246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ebb98 00:31:42.281 [2024-07-25 
12:44:15.676253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.676280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.686056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1ca0 00:31:42.281 [2024-07-25 12:44:15.687035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.687061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:42.281 [2024-07-25 12:44:15.697518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6738 00:31:42.281 [2024-07-25 12:44:15.698593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.281 [2024-07-25 12:44:15.698620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:42.540 [2024-07-25 12:44:15.708588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e99d8 00:31:42.540 [2024-07-25 12:44:15.709826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.540 [2024-07-25 12:44:15.709853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:42.540 [2024-07-25 12:44:15.719621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1ca0 00:31:42.540 [2024-07-25 12:44:15.720784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.540 [2024-07-25 12:44:15.720811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.540 [2024-07-25 12:44:15.731051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e0a68 00:31:42.540 [2024-07-25 12:44:15.732445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.540 [2024-07-25 12:44:15.732471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.540 [2024-07-25 12:44:15.741872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e1b48 00:31:42.540 [2024-07-25 12:44:15.743258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.540 [2024-07-25 12:44:15.743284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.540 [2024-07-25 12:44:15.750861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea248 
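When the two-second window ends, the script reads the controller's error counters back over the same bperf socket and requires a non-zero transient-transport-error count, as it did for the preceding run above, where the count was 205. The full jq path it uses is truncated in the trace, so the sketch below simply sums every field of that name wherever it appears in the iostat output:

  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
               | jq '[.. | .command_transient_transport_error? | numbers] | add // 0')
  (( errcount > 0 ))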
00:31:42.540 [2024-07-25 12:44:15.751675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.540 [2024-07-25 12:44:15.751701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:42.540 [2024-07-25 12:44:15.761553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1430 00:31:42.540 [2024-07-25 12:44:15.762370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.540 [2024-07-25 12:44:15.762396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:42.540 [2024-07-25 12:44:15.772560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fb8b8 00:31:42.540 [2024-07-25 12:44:15.773544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.773576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.783356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f7da8 00:31:42.541 [2024-07-25 12:44:15.784220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.784245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.794182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e3060 00:31:42.541 [2024-07-25 12:44:15.794946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.794972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.805498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ed0b0 00:31:42.541 [2024-07-25 12:44:15.806487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.806512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.816352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ef6a8 00:31:42.541 [2024-07-25 12:44:15.817362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.817388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.827165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) 
with pdu=0x2000190ec840 00:31:42.541 [2024-07-25 12:44:15.828039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.828064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.839522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fa3a0 00:31:42.541 [2024-07-25 12:44:15.840617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.840644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.851027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fda78 00:31:42.541 [2024-07-25 12:44:15.852310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.852335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.861752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6fa8 00:31:42.541 [2024-07-25 12:44:15.863008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.863036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.873028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190dece0 00:31:42.541 [2024-07-25 12:44:15.874419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.874446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.882306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea680 00:31:42.541 [2024-07-25 12:44:15.883219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.883245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.893193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f5be8 00:31:42.541 [2024-07-25 12:44:15.893972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.893998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.904481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fbbec0) with pdu=0x2000190f31b8 00:31:42.541 [2024-07-25 12:44:15.905495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.905520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.915344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea680 00:31:42.541 [2024-07-25 12:44:15.916417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.916443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.926523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f9b30 00:31:42.541 [2024-07-25 12:44:15.927677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.927703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.937579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f7100 00:31:42.541 [2024-07-25 12:44:15.938767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.938793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.948867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190df118 00:31:42.541 [2024-07-25 12:44:15.950199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.541 [2024-07-25 12:44:15.950225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:42.541 [2024-07-25 12:44:15.959916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fe720 00:31:42.801 [2024-07-25 12:44:15.961152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:15.961180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:42.801 [2024-07-25 12:44:15.971088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e01f8 00:31:42.801 [2024-07-25 12:44:15.972599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:15.972625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:42.801 [2024-07-25 12:44:15.980302] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea248 00:31:42.801 [2024-07-25 12:44:15.981219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:15.981245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.801 [2024-07-25 12:44:15.991018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fb480 00:31:42.801 [2024-07-25 12:44:15.991934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:15.991960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.801 [2024-07-25 12:44:16.003385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e5658 00:31:42.801 [2024-07-25 12:44:16.004539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:16.004570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:42.801 [2024-07-25 12:44:16.014250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f5be8 00:31:42.801 [2024-07-25 12:44:16.015345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:16.015371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:42.801 [2024-07-25 12:44:16.025110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e9e10 00:31:42.801 [2024-07-25 12:44:16.026107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:16.026141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:42.801 [2024-07-25 12:44:16.036276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fbcf0 00:31:42.801 [2024-07-25 12:44:16.037575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.801 [2024-07-25 12:44:16.037602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.045778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e4de8 00:31:42.802 [2024-07-25 12:44:16.046540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.046574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.057255] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190df118 00:31:42.802 [2024-07-25 12:44:16.058315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.058341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.067962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e6300 00:31:42.802 [2024-07-25 12:44:16.068978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.069004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.078800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e2c28 00:31:42.802 [2024-07-25 12:44:16.079806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.079832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.089640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ebb98 00:31:42.802 [2024-07-25 12:44:16.090613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.090639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.101110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f3e60 00:31:42.802 [2024-07-25 12:44:16.102326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.102352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.111809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190eaef0 00:31:42.802 [2024-07-25 12:44:16.112960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.112986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.122641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f96f8 00:31:42.802 [2024-07-25 12:44:16.123760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.123786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:42.802 
[2024-07-25 12:44:16.134121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ea680 00:31:42.802 [2024-07-25 12:44:16.135430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.135456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.144836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e8088 00:31:42.802 [2024-07-25 12:44:16.146131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.157117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f8e88 00:31:42.802 [2024-07-25 12:44:16.158949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.158975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.166024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:42.802 [2024-07-25 12:44:16.166832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.166859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.176855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:42.802 [2024-07-25 12:44:16.177619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.177645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.187711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:42.802 [2024-07-25 12:44:16.188482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.188507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.198537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:42.802 [2024-07-25 12:44:16.199331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.199356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:42.802 [2024-07-25 12:44:16.209400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:42.802 [2024-07-25 12:44:16.210188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.802 [2024-07-25 12:44:16.210214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:42.802 [2024-07-25 12:44:16.220257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:43.062 [2024-07-25 12:44:16.221035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.062 [2024-07-25 12:44:16.221061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:43.062 [2024-07-25 12:44:16.231101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f57b0 00:31:43.062 [2024-07-25 12:44:16.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.062 [2024-07-25 12:44:16.231902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:43.062 [2024-07-25 12:44:16.242554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fef90 00:31:43.062 [2024-07-25 12:44:16.243498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.062 [2024-07-25 12:44:16.243523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:43.062 [2024-07-25 12:44:16.254110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e27f0 00:31:43.062 [2024-07-25 12:44:16.255182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.062 [2024-07-25 12:44:16.255208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:43.062 [2024-07-25 12:44:16.264947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e5ec8 00:31:43.062 [2024-07-25 12:44:16.266046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.062 [2024-07-25 12:44:16.266072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:43.062 [2024-07-25 12:44:16.275983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190fd208 00:31:43.063 [2024-07-25 12:44:16.277590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.277617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 
cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.287178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e49b0 00:31:43.063 [2024-07-25 12:44:16.288420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.288446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.298039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f35f0 00:31:43.063 [2024-07-25 12:44:16.299295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.299320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.308894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ddc00 00:31:43.063 [2024-07-25 12:44:16.310028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.310058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.320275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1430 00:31:43.063 [2024-07-25 12:44:16.321587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.321613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.330136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f20d8 00:31:43.063 [2024-07-25 12:44:16.331273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.331300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.340825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f7da8 00:31:43.063 [2024-07-25 12:44:16.341739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.341765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.352215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f6020 00:31:43.063 [2024-07-25 12:44:16.353420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.353447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.363406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f7da8 00:31:43.063 [2024-07-25 12:44:16.364768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.364793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.374723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1430 00:31:43.063 [2024-07-25 12:44:16.375995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.376021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.385537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e5220 00:31:43.063 [2024-07-25 12:44:16.386816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.386841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.396851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ddc00 00:31:43.063 [2024-07-25 12:44:16.398219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.398245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.405670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e5220 00:31:43.063 [2024-07-25 12:44:16.406481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.406508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.417169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e99d8 00:31:43.063 [2024-07-25 12:44:16.418246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.418272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.428092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e49b0 00:31:43.063 [2024-07-25 12:44:16.429138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.429163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.438788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f1430 00:31:43.063 [2024-07-25 12:44:16.439826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.439852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.450127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f20d8 00:31:43.063 [2024-07-25 12:44:16.451216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.451242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.461518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190ef6a8 00:31:43.063 [2024-07-25 12:44:16.462760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.462786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:43.063 [2024-07-25 12:44:16.470717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190df550 00:31:43.063 [2024-07-25 12:44:16.471431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.063 [2024-07-25 12:44:16.471457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.482115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e99d8 00:31:43.324 [2024-07-25 12:44:16.483007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.483033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.494271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e1b48 00:31:43.324 [2024-07-25 12:44:16.495967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.495993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.505491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190df118 00:31:43.324 [2024-07-25 12:44:16.506829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.506855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.514885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f6cc8 00:31:43.324 [2024-07-25 12:44:16.515802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.515828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.525767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190dfdc0 00:31:43.324 [2024-07-25 12:44:16.527277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.527303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.536973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.537857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.537883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.547799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.548813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.548838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.558646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.559651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.559676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.569475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.570483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.570508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.580306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.581296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 
12:44:16.581322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.591153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.592169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.592199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.601982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.603005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.603031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.612844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.613850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.613876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.623655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e95a0 00:31:43.324 [2024-07-25 12:44:16.624657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.324 [2024-07-25 12:44:16.624682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:43.324 [2024-07-25 12:44:16.634834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e0a68 00:31:43.325 [2024-07-25 12:44:16.635962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.325 [2024-07-25 12:44:16.635987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:43.325 [2024-07-25 12:44:16.645865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e0a68 00:31:43.325 [2024-07-25 12:44:16.646984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.325 [2024-07-25 12:44:16.647009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:43.325 [2024-07-25 12:44:16.656702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f2948 00:31:43.325 [2024-07-25 12:44:16.657813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:43.325 [2024-07-25 12:44:16.657839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:43.325 [2024-07-25 12:44:16.667558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190e3498 00:31:43.325 [2024-07-25 12:44:16.668668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.325 [2024-07-25 12:44:16.668694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:43.325 [2024-07-25 12:44:16.678433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f9b30 00:31:43.325 [2024-07-25 12:44:16.679584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.325 [2024-07-25 12:44:16.679610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:43.325 [2024-07-25 12:44:16.689271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbbec0) with pdu=0x2000190f92c0 00:31:43.325 [2024-07-25 12:44:16.690387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:43.325 [2024-07-25 12:44:16.690412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:43.325 00:31:43.325 Latency(us) 00:31:43.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.325 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:43.325 nvme0n1 : 2.00 23411.07 91.45 0.00 0.00 5457.60 3377.62 13107.20 00:31:43.325 =================================================================================================================== 00:31:43.325 Total : 23411.07 91.45 0.00 0.00 5457.60 3377.62 13107.20 00:31:43.325 0 00:31:43.325 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:43.325 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:43.325 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:43.325 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:43.325 | .driver_specific 00:31:43.325 | .nvme_error 00:31:43.325 | .status_code 00:31:43.325 | .command_transient_transport_error' 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 )) 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 606020 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 606020 ']' 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 606020 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 
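The shell trace above is the pass/fail check for this run: after bdevperf finishes, the harness reads the NVMe error counters kept by the bdev_nvme layer and requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (this run counted 183), then kills the bdevperf process. A condensed sketch of that check, using only the rpc.py invocation and jq filter visible in the trace (the errcount variable name is illustrative):

  # ask bdevperf (RPC socket /var/tmp/bperf.sock) for per-bdev I/O statistics, including NVMe error counters
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
             jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the digest-error test only passes if the injected CRC corruption actually produced transient transport errors
  (( errcount > 0 ))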
00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 606020 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 606020' 00:31:43.584 killing process with pid 606020 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 606020 00:31:43.584 Received shutdown signal, test time was about 2.000000 seconds 00:31:43.584 00:31:43.584 Latency(us) 00:31:43.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.584 =================================================================================================================== 00:31:43.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.584 12:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 606020 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=606643 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 606643 /var/tmp/bperf.sock 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 606643 ']' 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:43.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:43.844 12:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:43.844 [2024-07-25 12:44:17.180317] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:31:43.844 [2024-07-25 12:44:17.180369] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606643 ] 00:31:43.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:43.844 Zero copy mechanism will not be used. 00:31:43.844 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.844 [2024-07-25 12:44:17.256955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.103 [2024-07-25 12:44:17.336217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.672 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:44.672 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:31:44.672 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:44.672 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:44.933 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:44.933 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.933 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:44.933 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.933 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:44.933 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:45.193 nvme0n1 00:31:45.193 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:45.193 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.193 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:45.193 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.193 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:45.193 12:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:45.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:45.453 Zero copy mechanism will not be used. 00:31:45.453 Running I/O for 2 seconds... 
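Between the two runs the trace shows the setup for the second digest-error pass: bdevperf is launched with -m 2 (core mask), -r /var/tmp/bperf.sock (RPC socket), -w randwrite, -o 131072 (I/O size in bytes), -t 2 (seconds), -q 16 (queue depth) and -z (wait for the perform_tests RPC), then the connection and the CRC32C error injector are configured over RPC before I/O starts. A condensed sketch of that RPC sequence, taken from the calls visible above (rpc_cmd is the autotest wrapper; the trace does not expand which socket it targets, so that part is an assumption):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  # keep per-command NVMe error statistics and let bdev_nvme retry failed commands
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # crc32c error injection off while the controller attaches (rpc_cmd, presumably the target-side RPC socket)
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # attach the TCP target with data digest enabled, so every data PDU carries a CRC32C that can be corrupted
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 32 crc32c operations, then drive the 2-second randwrite workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests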
00:31:45.453 [2024-07-25 12:44:18.689672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.690219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.690264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.700418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.700801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.700832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.711410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.711799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.711829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.722492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.722897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.722925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.733236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.733664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.733692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.743788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.744125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.744152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.754772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.755145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.755173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.765382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.766004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.766038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.776411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.776835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.776863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.786517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.453 [2024-07-25 12:44:18.786854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.453 [2024-07-25 12:44:18.786882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.453 [2024-07-25 12:44:18.797261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.454 [2024-07-25 12:44:18.797627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.454 [2024-07-25 12:44:18.797655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.454 [2024-07-25 12:44:18.808013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.454 [2024-07-25 12:44:18.808390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.454 [2024-07-25 12:44:18.808417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.454 [2024-07-25 12:44:18.819135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.454 [2024-07-25 12:44:18.819697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.454 [2024-07-25 12:44:18.819726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.454 [2024-07-25 12:44:18.830207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.454 [2024-07-25 12:44:18.830592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.454 [2024-07-25 12:44:18.830620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.454 [2024-07-25 12:44:18.840816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.454 [2024-07-25 12:44:18.841201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.454 [2024-07-25 12:44:18.841229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.454 [2024-07-25 12:44:18.851420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.454 [2024-07-25 12:44:18.851880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.454 [2024-07-25 12:44:18.851909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.454 [2024-07-25 12:44:18.862756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.454 [2024-07-25 12:44:18.863154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.454 [2024-07-25 12:44:18.863182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.717 [2024-07-25 12:44:18.873794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.717 [2024-07-25 12:44:18.874201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.717 [2024-07-25 12:44:18.874228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.717 [2024-07-25 12:44:18.884700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.717 [2024-07-25 12:44:18.885141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.717 [2024-07-25 12:44:18.885169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.717 [2024-07-25 12:44:18.895578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.717 [2024-07-25 12:44:18.895956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.717 [2024-07-25 12:44:18.895984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.717 [2024-07-25 12:44:18.906358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.906709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.906736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.917472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.917933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.917960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.928854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.929382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.929410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.939867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.940316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.940343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.948321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.948676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.948703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.956276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.956637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.956664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.965817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.966158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.966185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.976197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.976609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 
[2024-07-25 12:44:18.976636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.986627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.987010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.987037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:18.996945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:18.997335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:18.997362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.007347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.007646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.007673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.017985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.018049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.018074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.029027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.029270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.029296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.039982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.040247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.040279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.051410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.051625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.051653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.061464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.061545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.061577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.072136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.072393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.072418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.082741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.083038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.083064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.093797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.094081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.094108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.104066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.104131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.104157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.114451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.114770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.114797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.718 [2024-07-25 12:44:19.124909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:45.718 [2024-07-25 12:44:19.124986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.718 [2024-07-25 12:44:19.125012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.007 [2024-07-25 12:44:19.135378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.007 [2024-07-25 12:44:19.135661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.007 [2024-07-25 12:44:19.135689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.007 [2024-07-25 12:44:19.146078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.007 [2024-07-25 12:44:19.146368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.007 [2024-07-25 12:44:19.146394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.007 [2024-07-25 12:44:19.156134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.007 [2024-07-25 12:44:19.156467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.007 [2024-07-25 12:44:19.156494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.007 [2024-07-25 12:44:19.165590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.007 [2024-07-25 12:44:19.165836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.007 [2024-07-25 12:44:19.165862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.007 [2024-07-25 12:44:19.176325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.007 [2024-07-25 12:44:19.176609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.176635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.187362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.187674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.187701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.197991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.198263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.198290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.206609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.206719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.206745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.215039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.215138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.215164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.225638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.225700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.225727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.236393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.236670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.247705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.248003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.248029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.258339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.258409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.258436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.268774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.269001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.269027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.276671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.276970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.276996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.283921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.283999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.284023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.289741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.289803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.289827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.294316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.294376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.294404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.300208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.300270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.300294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.305866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.305963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.305987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.312692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 
12:44:19.312753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.312778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.322234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.322308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.322334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.329179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.329275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.329300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.337694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.337768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.337793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.344786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.345018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.345044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.350790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.350886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.350911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.356530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.356629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.356654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.364689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 
00:31:46.008 [2024-07-25 12:44:19.364757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.364781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.373295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.373618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.373645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.380365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.008 [2024-07-25 12:44:19.380423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.008 [2024-07-25 12:44:19.380448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.008 [2024-07-25 12:44:19.386469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.009 [2024-07-25 12:44:19.386541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.009 [2024-07-25 12:44:19.386572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.009 [2024-07-25 12:44:19.394336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.009 [2024-07-25 12:44:19.394410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.009 [2024-07-25 12:44:19.394435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.009 [2024-07-25 12:44:19.401364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.009 [2024-07-25 12:44:19.401462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.009 [2024-07-25 12:44:19.401488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.009 [2024-07-25 12:44:19.408406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.009 [2024-07-25 12:44:19.408509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.009 [2024-07-25 12:44:19.408534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.009 [2024-07-25 12:44:19.415911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) 
with pdu=0x2000190fef90 00:31:46.009 [2024-07-25 12:44:19.415971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.009 [2024-07-25 12:44:19.415996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.421984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.422056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.422081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.427438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.427500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.427524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.433575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.433637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.433661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.440278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.440365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.440391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.448099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.448171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.448196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.453576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.453657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.453681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.457873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.457955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.457979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.462226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.462292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.462316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.466649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.466727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.466755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.471971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.472045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.472069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.477734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.477795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.477820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.483674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.483739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.483763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.493924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.494001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.494025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.504030] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.504143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.504168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.515523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.515795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.515821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.527309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.527609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.527636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.539198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.539442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.539469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.551309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.551381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.551407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.562778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.562862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.562887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.574872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.574939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.574964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.291 
[2024-07-25 12:44:19.586580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.586923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.291 [2024-07-25 12:44:19.586950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.291 [2024-07-25 12:44:19.598668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.291 [2024-07-25 12:44:19.598859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.598885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.609676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.610006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.610032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.621426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.621740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.621766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.633289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.633383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.633408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.644485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.644562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.644587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.656059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.656388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.667914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.668193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.668219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.679152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.679221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.679246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.691007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.691324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.691349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.702463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.702803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.702829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.292 [2024-07-25 12:44:19.708423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.292 [2024-07-25 12:44:19.708504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.292 [2024-07-25 12:44:19.708528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.712719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.712789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.712814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.716961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.717023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.717047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.721300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.721364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.721394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.725884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.725947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.725972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.732671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.732750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.732774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.740101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.740183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.740208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.747037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.747121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.747145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.754877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.754956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.754980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.761641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.761709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.761734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.767637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.553 [2024-07-25 12:44:19.767732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.553 [2024-07-25 12:44:19.767756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.553 [2024-07-25 12:44:19.776372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.776452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.776477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.784741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.784826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.784851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.792338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.792409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.792434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.799349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.799439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.799464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.808061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.808294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.808321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.816260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.816326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.816350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.822899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.822977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.823002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.829790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.829876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.829901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.836760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.836847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.836871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.845544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.845631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.845656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.851962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.852038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.852063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.858339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.858423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.858447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.863206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.863265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 
12:44:19.863289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.867829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.867911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.867936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.873672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.873761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.873786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.878534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.878624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.878649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.882931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.882998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.883022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.889026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.889111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.889136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.893977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.894070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.894099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.900075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.900341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:46.554 [2024-07-25 12:44:19.900368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.908591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.908688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.908713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.918330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.918654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.918680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.928121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.928412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.928438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.938409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.938669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.938696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.948927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.949163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.949190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.958805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.554 [2024-07-25 12:44:19.959039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.554 [2024-07-25 12:44:19.959064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.554 [2024-07-25 12:44:19.969070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.555 [2024-07-25 12:44:19.969329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.555 [2024-07-25 12:44:19.969355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:19.979053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:19.979350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:19.979376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:19.988941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:19.989263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:19.989290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:19.999223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:19.999560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:19.999587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:20.008588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:20.008751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:20.008777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:20.017100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:20.017379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:20.017405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:20.027324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:20.027592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:20.027619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:20.037709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:20.037827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:20.037851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:20.047018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.816 [2024-07-25 12:44:20.047180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.816 [2024-07-25 12:44:20.047205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.816 [2024-07-25 12:44:20.055744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.055980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.056006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.063598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.063793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.063817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.068651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.068792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.068817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.072094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.072232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.072256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.075394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.075542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.075574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.079946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.080220] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.080246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.084889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.085005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.085030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.088652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.088787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.088812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.091698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.091903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.091927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.094650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.094793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.094822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.097899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.098081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.098106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.100925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.101100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.101125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.104407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.104508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.104534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.107614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.107735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.107760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.113227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.113333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.113357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.116637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.116727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.116750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.119795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.119915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.119939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.123139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.123277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.123301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.126300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.126409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.126434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.129577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 
12:44:20.129684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.129708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.132639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.132755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.132779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.135820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.135926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.135950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.140284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.140572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.140598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.147636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.147845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.147869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.155100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.155353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.155382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.817 [2024-07-25 12:44:20.162157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.817 [2024-07-25 12:44:20.162390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.817 [2024-07-25 12:44:20.162417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.168577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with 
pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.168667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.168692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.175866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.176081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.176105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.183090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.183189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.183215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.191372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.191462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.191487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.197718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.197921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.197945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.206213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.206314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.206340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.216001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.216070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.216094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.223461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.223528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.223561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.818 [2024-07-25 12:44:20.229479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:46.818 [2024-07-25 12:44:20.229772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.818 [2024-07-25 12:44:20.229799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.236494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.236822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.236856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.242662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.242743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.242767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.245916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.246014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.246037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.249369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.249479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.249502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.254692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.254773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.254797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.261620] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.261713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.261738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.269312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.269566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.269593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.276826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.277089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.277115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.284141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.284404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.284429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.293425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.293725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.293751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.303297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.303563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.303589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.310400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.310498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.310523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:31:47.080 [2024-07-25 12:44:20.313552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.313651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.313675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.316824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.316916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.316941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.320163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.320281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.320306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.323206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.323309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.323333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.326706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.327030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.327056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.332585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.332688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.332712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.338724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.338982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.339008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.343036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.343135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.343160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.346077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.346181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.080 [2024-07-25 12:44:20.346205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.080 [2024-07-25 12:44:20.349257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.080 [2024-07-25 12:44:20.349379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.349403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.354975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.355096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.355121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.359401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.359685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.359711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.365905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.366214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.366239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.370532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.370633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.373644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.373773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.373801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.376844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.376919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.376943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.380560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.380681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.380706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.384314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.384396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.384420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.390592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.390780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.390805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.399245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.399385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.399410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.407998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.408302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.408328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.417936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.418254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.418280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.428310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.428613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.428639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.439230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.439455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.439481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.449558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.449655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.449680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.459695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.459969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.459995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.470104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.470383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.470409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.480509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.480842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 
[2024-07-25 12:44:20.480867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.081 [2024-07-25 12:44:20.490675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.081 [2024-07-25 12:44:20.490979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.081 [2024-07-25 12:44:20.491005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.500251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.500561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.500587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.509894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.510159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.510186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.517098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.517212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.517236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.524390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.524668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.524694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.528901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.529037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.529062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.531973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.532085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.532109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.535036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.535166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.535190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.538135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.538277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.538302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.541218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.541323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.541348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.544359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.544460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.544485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.547407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.547533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.547566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.550442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.550582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.550607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.553458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.342 [2024-07-25 12:44:20.553603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.342 [2024-07-25 12:44:20.553628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.342 [2024-07-25 12:44:20.556485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.556597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.556622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.559541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.559693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.559717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.563257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.563462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.563487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.571730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.571999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.572025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.578378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.578649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.578675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.586035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.586152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.586177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.589325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.589439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.589464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.592503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.592613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.592638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.595633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.595743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.595768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.598720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.598842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.598866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.601741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.601863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.601888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.604788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.604943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.604967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.607853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.607985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.608010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.610915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 
[2024-07-25 12:44:20.611032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.611057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.613956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.614065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.614090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.618507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.618644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.618674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.621682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.621783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.621808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.624694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.624814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.624838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.627713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.627843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.627867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.630726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.630844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.630869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.633730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.633862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.633887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.637042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.637179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.637203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.642730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.643028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.643054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.652154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.652413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.652439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.662446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.662724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.343 [2024-07-25 12:44:20.662750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.343 [2024-07-25 12:44:20.672840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.343 [2024-07-25 12:44:20.673125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.344 [2024-07-25 12:44:20.673151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.344 [2024-07-25 12:44:20.682393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fbdb40) with pdu=0x2000190fef90 00:31:47.344 [2024-07-25 12:44:20.682621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.344 [2024-07-25 12:44:20.682645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.344 00:31:47.344 Latency(us) 00:31:47.344 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.344 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:47.344 nvme0n1 : 2.01 4186.67 523.33 0.00 0.00 3812.14 1329.62 12048.54 00:31:47.344 =================================================================================================================== 00:31:47.344 Total : 4186.67 523.33 0.00 0.00 3812.14 1329.62 12048.54 00:31:47.344 0 00:31:47.344 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:47.344 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:47.344 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:47.344 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:47.344 | .driver_specific 00:31:47.344 | .nvme_error 00:31:47.344 | .status_code 00:31:47.344 | .command_transient_transport_error' 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 )) 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 606643 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 606643 ']' 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 606643 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 606643 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 606643' 00:31:47.604 killing process with pid 606643 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 606643 00:31:47.604 Received shutdown signal, test time was about 2.000000 seconds 00:31:47.604 00:31:47.604 Latency(us) 00:31:47.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.604 =================================================================================================================== 00:31:47.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:47.604 12:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 606643 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 604462 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 604462 ']' 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 604462 00:31:47.864 12:44:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 604462 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 604462' 00:31:47.864 killing process with pid 604462 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 604462 00:31:47.864 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 604462 00:31:48.125 00:31:48.125 real 0m17.023s 00:31:48.125 user 0m34.071s 00:31:48.125 sys 0m3.562s 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:48.125 ************************************ 00:31:48.125 END TEST nvmf_digest_error 00:31:48.125 ************************************ 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.125 rmmod nvme_tcp 00:31:48.125 rmmod nvme_fabrics 00:31:48.125 rmmod nvme_keyring 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 604462 ']' 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 604462 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 604462 ']' 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 604462 00:31:48.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (604462) - No such process 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 604462 is not found' 00:31:48.125 Process with pid 604462 is not found 00:31:48.125 12:44:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.125 12:44:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.671 12:44:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:50.671 00:31:50.671 real 0m45.709s 00:31:50.671 user 1m11.799s 00:31:50.671 sys 0m13.871s 00:31:50.671 12:44:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:50.671 12:44:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:50.671 ************************************ 00:31:50.672 END TEST nvmf_digest 00:31:50.672 ************************************ 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.672 ************************************ 00:31:50.672 START TEST nvmf_bdevperf 00:31:50.672 ************************************ 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:50.672 * Looking for test storage... 
00:31:50.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:50.672 12:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:58.815 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:58.815 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.815 12:44:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:58.815 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:58.815 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:58.815 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:58.816 12:44:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:58.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:31:58.816 00:31:58.816 --- 10.0.0.2 ping statistics --- 00:31:58.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.816 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:58.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:31:58.816 00:31:58.816 --- 10.0.0.1 ping statistics --- 00:31:58.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.816 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=611770 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 611770 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 611770 ']' 
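
The target in this run is isolated in a network namespace before nvmf_tgt is started; the trace above shows the whole setup. The commands below are a minimal sketch of the same topology done by hand, assuming the same E810 port names (cvl_0_0, cvl_0_1), the same 10.0.0.0/24 addressing, the same workspace path, and a root shell; names and paths will differ on other machines.

    ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator reaches the target address
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Once the target is up, the test waits for its RPC socket (waitforlisten) and then issues the nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls seen in the trace below.
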
00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:58.816 12:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:58.816 [2024-07-25 12:44:32.202829] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:31:58.816 [2024-07-25 12:44:32.202894] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.076 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.076 [2024-07-25 12:44:32.295627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.076 [2024-07-25 12:44:32.403075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.076 [2024-07-25 12:44:32.403145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.076 [2024-07-25 12:44:32.403156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.076 [2024-07-25 12:44:32.403166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.076 [2024-07-25 12:44:32.403174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:59.076 [2024-07-25 12:44:32.403352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.076 [2024-07-25 12:44:32.403502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.076 [2024-07-25 12:44:32.403503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:00.020 [2024-07-25 12:44:33.136349] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:00.020 Malloc0 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:00.020 [2024-07-25 12:44:33.218951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:00.020 { 00:32:00.020 "params": { 00:32:00.020 "name": "Nvme$subsystem", 00:32:00.020 "trtype": "$TEST_TRANSPORT", 00:32:00.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.020 "adrfam": "ipv4", 00:32:00.020 "trsvcid": "$NVMF_PORT", 00:32:00.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.020 "hdgst": ${hdgst:-false}, 00:32:00.020 "ddgst": ${ddgst:-false} 00:32:00.020 }, 00:32:00.020 "method": "bdev_nvme_attach_controller" 00:32:00.020 } 00:32:00.020 EOF 00:32:00.020 )") 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:00.020 12:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:00.020 "params": { 00:32:00.020 "name": "Nvme1", 00:32:00.020 "trtype": "tcp", 00:32:00.020 "traddr": "10.0.0.2", 00:32:00.020 "adrfam": "ipv4", 00:32:00.020 "trsvcid": "4420", 00:32:00.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:00.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:00.020 "hdgst": false, 00:32:00.020 "ddgst": false 00:32:00.020 }, 00:32:00.020 "method": "bdev_nvme_attach_controller" 00:32:00.020 }' 00:32:00.020 [2024-07-25 12:44:33.275079] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:32:00.020 [2024-07-25 12:44:33.275140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611820 ] 00:32:00.020 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.020 [2024-07-25 12:44:33.359714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.282 [2024-07-25 12:44:33.454978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.282 Running I/O for 1 seconds... 
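
The bdevperf runs in this test do not use a pre-written config file: gen_nvmf_target_json assembles a JSON config on the fly and hands it to bdevperf on /dev/fd/62 or /dev/fd/63, and the printf trace above shows the attach parameters that end up in it. A standalone equivalent might look roughly like the following sketch; the outer "subsystems"/"bdev" wrapper is assumed here (only the inner params/method block is printed in the log), and /tmp/nvme_attach.json is just a hypothetical file name.

    cat > /tmp/nvme_attach.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload as the short run above: queue depth 128, 4 KiB verify I/O for 1 second.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/nvme_attach.json -q 128 -o 4096 -w verify -t 1
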
00:32:01.226 00:32:01.226 Latency(us) 00:32:01.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.226 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:01.226 Verification LBA range: start 0x0 length 0x4000 00:32:01.226 Nvme1n1 : 1.01 6739.28 26.33 0.00 0.00 18914.40 2192.94 16232.76 00:32:01.226 =================================================================================================================== 00:32:01.226 Total : 6739.28 26.33 0.00 0.00 18914.40 2192.94 16232.76 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=612111 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:01.487 { 00:32:01.487 "params": { 00:32:01.487 "name": "Nvme$subsystem", 00:32:01.487 "trtype": "$TEST_TRANSPORT", 00:32:01.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.487 "adrfam": "ipv4", 00:32:01.487 "trsvcid": "$NVMF_PORT", 00:32:01.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.487 "hdgst": ${hdgst:-false}, 00:32:01.487 "ddgst": ${ddgst:-false} 00:32:01.487 }, 00:32:01.487 "method": "bdev_nvme_attach_controller" 00:32:01.487 } 00:32:01.487 EOF 00:32:01.487 )") 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:01.487 12:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:01.487 "params": { 00:32:01.487 "name": "Nvme1", 00:32:01.487 "trtype": "tcp", 00:32:01.487 "traddr": "10.0.0.2", 00:32:01.487 "adrfam": "ipv4", 00:32:01.487 "trsvcid": "4420", 00:32:01.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:01.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:01.487 "hdgst": false, 00:32:01.487 "ddgst": false 00:32:01.487 }, 00:32:01.487 "method": "bdev_nvme_attach_controller" 00:32:01.487 }' 00:32:01.487 [2024-07-25 12:44:34.843518] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:32:01.487 [2024-07-25 12:44:34.843606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612111 ] 00:32:01.487 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.747 [2024-07-25 12:44:34.929684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.747 [2024-07-25 12:44:35.024414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.009 Running I/O for 15 seconds... 
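
The second, 15-second run is started the same way, and then the target is deliberately taken away underneath it: bdevperf.sh sends SIGKILL to the nvmf_tgt process (pid 611770 here) a few seconds into the run. The commands that were still in flight on the dead connection are completed with "ABORTED - SQ DELETION" statuses, which is the long run of qpair prints that follows. A minimal sketch of that sequence, reusing the hypothetical /tmp/nvme_attach.json from the sketch above and assuming the target pid was saved when nvmf_tgt was launched:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/nvme_attach.json -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    sleep 3
    kill -9 "$nvmf_tgt_pid"     # $nvmf_tgt_pid: saved at target start; 611770 in this log
    wait "$bdevperf_pid"        # in-flight I/O is reported as ABORTED - SQ DELETION
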
00:32:04.562 12:44:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 611770 00:32:04.562 12:44:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:04.562 [2024-07-25 12:44:37.814876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.814938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.814964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.814973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.814984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.814991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.562 [2024-07-25 12:44:37.815274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.562 [2024-07-25 12:44:37.815304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.562 [2024-07-25 12:44:37.815323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.562 [2024-07-25 12:44:37.815343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.562 [2024-07-25 12:44:37.815353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.563 [2024-07-25 12:44:37.815822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 
12:44:37.815831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.815987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.815996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.816002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.816011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.816019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.816029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.816036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.816045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.816052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.816061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.816075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.816085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.816092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.816100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.563 [2024-07-25 12:44:37.816107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.563 [2024-07-25 12:44:37.816115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:60 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3520 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 
12:44:37.816494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.564 [2024-07-25 12:44:37.816679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.564 [2024-07-25 12:44:37.816687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.816987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.816994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817171] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.565 [2024-07-25 12:44:37.817261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd98220 is same with the state(5) to be set 00:32:04.565 [2024-07-25 12:44:37.817279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.565 [2024-07-25 12:44:37.817287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.565 [2024-07-25 12:44:37.817293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3976 len:8 PRP1 0x0 PRP2 0x0 00:32:04.565 [2024-07-25 12:44:37.817305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.565 [2024-07-25 12:44:37.817364] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd98220 was disconnected and freed. reset controller. 
00:32:04.565 [2024-07-25 12:44:37.820733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.820803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.821500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.821523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.821532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.821753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.821960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.821970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.821981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.825228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.566 [2024-07-25 12:44:37.834606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.835181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.835206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.835215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.835419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.835637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.835649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.835657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.838906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.566 [2024-07-25 12:44:37.848138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.848838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.848899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.848912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.849152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.849360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.849373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.849381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.852685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.566 [2024-07-25 12:44:37.861673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.862358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.862417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.862430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.862692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.862902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.862914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.862921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.866167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.566 [2024-07-25 12:44:37.875313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.875996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.876055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.876067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.876303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.876512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.876523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.876531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.879789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.566 [2024-07-25 12:44:37.888938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.889536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.889570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.889579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.889783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.889986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.889998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.890006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.893239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.566 [2024-07-25 12:44:37.902580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.903234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.903294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.903306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.903543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.903766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.903777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.903792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.907041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.566 [2024-07-25 12:44:37.916206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.916888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.916948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.916960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.917196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.917403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.917414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.917423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.920689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.566 [2024-07-25 12:44:37.929829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.930502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.566 [2024-07-25 12:44:37.930574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.566 [2024-07-25 12:44:37.930587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.566 [2024-07-25 12:44:37.930824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.566 [2024-07-25 12:44:37.931032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.566 [2024-07-25 12:44:37.931043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.566 [2024-07-25 12:44:37.931051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.566 [2024-07-25 12:44:37.934303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.566 [2024-07-25 12:44:37.943449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.566 [2024-07-25 12:44:37.944127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.567 [2024-07-25 12:44:37.944188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.567 [2024-07-25 12:44:37.944200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.567 [2024-07-25 12:44:37.944437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.567 [2024-07-25 12:44:37.944659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.567 [2024-07-25 12:44:37.944672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.567 [2024-07-25 12:44:37.944680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.567 [2024-07-25 12:44:37.947927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.567 [2024-07-25 12:44:37.957088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.567 [2024-07-25 12:44:37.957795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.567 [2024-07-25 12:44:37.957857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.567 [2024-07-25 12:44:37.957869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.567 [2024-07-25 12:44:37.958102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.567 [2024-07-25 12:44:37.958308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.567 [2024-07-25 12:44:37.958320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.567 [2024-07-25 12:44:37.958328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.567 [2024-07-25 12:44:37.961574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.567 [2024-07-25 12:44:37.970714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.567 [2024-07-25 12:44:37.971252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.567 [2024-07-25 12:44:37.971277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.567 [2024-07-25 12:44:37.971286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.567 [2024-07-25 12:44:37.971488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.567 [2024-07-25 12:44:37.971699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.567 [2024-07-25 12:44:37.971712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.567 [2024-07-25 12:44:37.971719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.567 [2024-07-25 12:44:37.974958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.829 [2024-07-25 12:44:37.984278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.829 [2024-07-25 12:44:37.984890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.829 [2024-07-25 12:44:37.984940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.829 [2024-07-25 12:44:37.984952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.829 [2024-07-25 12:44:37.985180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.829 [2024-07-25 12:44:37.985386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.829 [2024-07-25 12:44:37.985396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.829 [2024-07-25 12:44:37.985404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.829 [2024-07-25 12:44:37.988657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.829 [2024-07-25 12:44:37.997787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.829 [2024-07-25 12:44:37.998422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.829 [2024-07-25 12:44:37.998467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.829 [2024-07-25 12:44:37.998478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:37.998715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:37.998926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:37.998938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:37.998946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.002190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.011328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.011944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.011988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.011999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.012222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.012427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.012439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.012446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.015691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.024822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.025437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.025478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.025488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.025721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.025926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.025936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.025943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.029178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.038301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.038933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.038972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.038982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.039203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.039407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.039417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.039425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.042668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.051799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.052450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.052488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.052499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.052728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.052933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.052942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.052949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.056177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.065334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.065950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.065987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.065999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.066219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.066424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.066433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.066440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.069682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.078802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.079443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.079479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.079489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.079715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.079919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.079929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.079936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.083161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.092282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.092801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.092819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.092832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.093032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.093232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.093241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.093248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.096469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.105777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.106366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.106402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.106412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.106638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.106842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.106851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.106858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.110094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.119404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.120035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.120071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.120081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.120300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.120503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.120512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.120519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.123753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.132869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.133510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.133545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.133564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.133782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.133985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.134001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.134009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.137235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.146351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.146943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.146978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.146988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.147206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.147410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.147419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.147426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.150659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.159808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.160445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.160480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.160491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.160717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.160921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.160930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.160937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.164163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.173281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.173900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.173935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.173945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.174163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.174366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.174376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.174383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.177617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.186741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.187243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.187278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.187290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.187509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.187721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.187731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.187738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.190963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.200274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.200909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.200944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.200954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.201172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.201375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.201385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.201392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.204629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.213754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.214415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.214450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.214460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.214687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.214891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.214900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.214907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.218134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.830 [2024-07-25 12:44:38.227260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.227903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.227938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.227948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.228170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.228374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.228383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.830 [2024-07-25 12:44:38.228390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.830 [2024-07-25 12:44:38.231624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.830 [2024-07-25 12:44:38.240742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.830 [2024-07-25 12:44:38.241379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.830 [2024-07-25 12:44:38.241414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:04.830 [2024-07-25 12:44:38.241424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:04.830 [2024-07-25 12:44:38.241651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:04.830 [2024-07-25 12:44:38.241855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.830 [2024-07-25 12:44:38.241864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.831 [2024-07-25 12:44:38.241871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.831 [2024-07-25 12:44:38.245096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.092 [2024-07-25 12:44:38.254222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.092 [2024-07-25 12:44:38.254746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.092 [2024-07-25 12:44:38.254781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.254793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.255014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.255217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.255226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.255233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.258465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.093 [2024-07-25 12:44:38.267806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.268346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.268364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.268372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.268578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.268779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.268788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.268799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.272020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.093 [2024-07-25 12:44:38.281405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.281993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.282029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.282039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.282257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.282460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.282469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.282476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.285710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.093 [2024-07-25 12:44:38.295023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.295652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.295688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.295699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.295918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.296121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.296130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.296137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.299369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.093 [2024-07-25 12:44:38.308486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.309118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.309153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.309163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.309381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.309591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.309601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.309608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.312840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.093 [2024-07-25 12:44:38.321956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.322608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.322647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.322658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.322879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.323082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.323091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.323098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.326330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.093 [2024-07-25 12:44:38.335454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.336067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.336102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.336113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.336331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.336534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.336543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.336559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.339787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.093 [2024-07-25 12:44:38.348915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.349435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.349454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.349462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.349670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.349870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.349878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.349885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.353118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.093 [2024-07-25 12:44:38.362440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.363054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.363089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.363099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.363318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.363525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.363535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.363543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.366781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.093 [2024-07-25 12:44:38.375915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.093 [2024-07-25 12:44:38.376418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.093 [2024-07-25 12:44:38.376436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.093 [2024-07-25 12:44:38.376444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.093 [2024-07-25 12:44:38.376651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.093 [2024-07-25 12:44:38.376852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.093 [2024-07-25 12:44:38.376861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.093 [2024-07-25 12:44:38.376868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.093 [2024-07-25 12:44:38.380091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.094 [2024-07-25 12:44:38.389401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.389929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.389944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.389952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.390151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.390351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.390360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.390367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.393594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.094 [2024-07-25 12:44:38.402905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.403428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.403442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.403450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.403654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.403854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.403864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.403871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.407104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.094 [2024-07-25 12:44:38.416425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.416967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.416983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.416989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.417189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.417388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.417398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.417404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.420631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.094 [2024-07-25 12:44:38.429958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.430605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.430641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.430653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.430872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.431076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.431085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.431092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.434324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.094 [2024-07-25 12:44:38.443455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.444083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.444118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.444128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.444346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.444558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.444569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.444576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.447804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.094 [2024-07-25 12:44:38.456929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.457473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.457491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.457502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.457709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.457910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.457919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.457925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.461145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.094 [2024-07-25 12:44:38.470444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.470987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.471003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.471011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.471210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.471409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.471419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.471425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.474677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.094 [2024-07-25 12:44:38.483982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.484512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.484528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.484535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.484740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.484940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.484949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.484955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.488173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.094 [2024-07-25 12:44:38.497478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.094 [2024-07-25 12:44:38.498006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.094 [2024-07-25 12:44:38.498022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.094 [2024-07-25 12:44:38.498028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.094 [2024-07-25 12:44:38.498228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.094 [2024-07-25 12:44:38.498428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.094 [2024-07-25 12:44:38.498441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.094 [2024-07-25 12:44:38.498447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.094 [2024-07-25 12:44:38.501675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.094 [2024-07-25 12:44:38.510992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.357 [2024-07-25 12:44:38.511518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.357 [2024-07-25 12:44:38.511534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.357 [2024-07-25 12:44:38.511541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.357 [2024-07-25 12:44:38.511747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.357 [2024-07-25 12:44:38.511948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.357 [2024-07-25 12:44:38.511956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.357 [2024-07-25 12:44:38.511963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.357 [2024-07-25 12:44:38.515183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.357 [2024-07-25 12:44:38.524489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.357 [2024-07-25 12:44:38.525022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.357 [2024-07-25 12:44:38.525038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.357 [2024-07-25 12:44:38.525045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.357 [2024-07-25 12:44:38.525245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.357 [2024-07-25 12:44:38.525444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.357 [2024-07-25 12:44:38.525454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.357 [2024-07-25 12:44:38.525460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.357 [2024-07-25 12:44:38.528688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.357 [2024-07-25 12:44:38.537990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.357 [2024-07-25 12:44:38.538481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.357 [2024-07-25 12:44:38.538496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.357 [2024-07-25 12:44:38.538503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.357 [2024-07-25 12:44:38.538707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.357 [2024-07-25 12:44:38.538907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.357 [2024-07-25 12:44:38.538916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.357 [2024-07-25 12:44:38.538923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.357 [2024-07-25 12:44:38.542141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.357 [2024-07-25 12:44:38.551447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.357 [2024-07-25 12:44:38.551962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.357 [2024-07-25 12:44:38.551978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.357 [2024-07-25 12:44:38.551985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.357 [2024-07-25 12:44:38.552183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.357 [2024-07-25 12:44:38.552383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.357 [2024-07-25 12:44:38.552392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.357 [2024-07-25 12:44:38.552399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.357 [2024-07-25 12:44:38.555621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.357 [2024-07-25 12:44:38.564924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.357 [2024-07-25 12:44:38.565422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.357 [2024-07-25 12:44:38.565437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.357 [2024-07-25 12:44:38.565444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.357 [2024-07-25 12:44:38.565648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.357 [2024-07-25 12:44:38.565848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.357 [2024-07-25 12:44:38.565857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.357 [2024-07-25 12:44:38.565864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.357 [2024-07-25 12:44:38.569084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.357 [2024-07-25 12:44:38.578395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.357 [2024-07-25 12:44:38.578787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.357 [2024-07-25 12:44:38.578802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.357 [2024-07-25 12:44:38.578809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.357 [2024-07-25 12:44:38.579008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.357 [2024-07-25 12:44:38.579207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.357 [2024-07-25 12:44:38.579215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.357 [2024-07-25 12:44:38.579222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.357 [2024-07-25 12:44:38.582450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.357 [2024-07-25 12:44:38.591959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.357 [2024-07-25 12:44:38.592574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.357 [2024-07-25 12:44:38.592609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.357 [2024-07-25 12:44:38.592619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.357 [2024-07-25 12:44:38.592842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.357 [2024-07-25 12:44:38.593045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.357 [2024-07-25 12:44:38.593055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.357 [2024-07-25 12:44:38.593062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.357 [2024-07-25 12:44:38.596291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.357 [2024-07-25 12:44:38.605414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.606043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.606079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.606089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.606307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.606511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.606519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.606526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.609762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.358 [2024-07-25 12:44:38.618890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.619572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.619607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.619618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.619836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.620039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.620049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.620056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.623287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.358 [2024-07-25 12:44:38.632406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.633035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.633070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.633081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.633300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.633503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.633513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.633524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.636761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.358 [2024-07-25 12:44:38.645879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.646374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.646392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.646399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.646607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.646808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.646817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.646824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.650043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.358 [2024-07-25 12:44:38.659349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.659867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.659884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.659891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.660090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.660290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.660299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.660306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.663526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.358 [2024-07-25 12:44:38.672826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.673232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.673250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.673257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.673458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.673664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.673673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.673680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.676899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.358 [2024-07-25 12:44:38.686418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.686905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.686922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.686929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.687129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.687329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.687338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.687345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.690569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.358 [2024-07-25 12:44:38.699868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.700392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.700407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.700414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.700620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.700821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.700830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.700837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.704055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.358 [2024-07-25 12:44:38.713358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.713901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.713917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.713924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.714124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.714323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.714332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.714339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.717560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.358 [2024-07-25 12:44:38.726858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.727387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.727402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.727409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.727615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.727819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.727828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.727835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.731052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.358 [2024-07-25 12:44:38.740357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.740864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.740879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.740886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.741085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.741285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.741294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.741300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.744520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.358 [2024-07-25 12:44:38.753847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.754335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.754350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.754357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.754562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.754762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.754771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.754780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.758003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.358 [2024-07-25 12:44:38.767324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.358 [2024-07-25 12:44:38.767821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.358 [2024-07-25 12:44:38.767836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.358 [2024-07-25 12:44:38.767843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.358 [2024-07-25 12:44:38.768042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.358 [2024-07-25 12:44:38.768242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.358 [2024-07-25 12:44:38.768251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.358 [2024-07-25 12:44:38.768257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.358 [2024-07-25 12:44:38.771488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.620 [2024-07-25 12:44:38.780815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.781343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.781358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.781365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.781569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.781770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.781779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.781786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.785008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.621 [2024-07-25 12:44:38.794324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.794854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.794870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.794877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.795076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.795276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.795285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.795292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.798515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.621 [2024-07-25 12:44:38.807823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.808346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.808361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.808368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.808573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.808773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.808782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.808789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.812013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.621 [2024-07-25 12:44:38.821312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.821828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.821844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.821858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.822058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.822258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.822267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.822273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.825495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.621 [2024-07-25 12:44:38.834809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.835300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.835315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.835322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.835522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.835728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.835737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.835744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.838966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.621 [2024-07-25 12:44:38.848359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.848876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.848893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.848901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.849100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.849300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.849309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.849316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.852543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.621 [2024-07-25 12:44:38.861861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.862353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.862368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.862375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.862581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.862780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.862793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.862800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.866027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.621 [2024-07-25 12:44:38.875474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.876028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.876045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.876053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.876253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.876453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.876461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.876468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.879699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.621 [2024-07-25 12:44:38.889049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.889557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.889573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.889580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.889780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.889981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.621 [2024-07-25 12:44:38.889990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.621 [2024-07-25 12:44:38.889997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.621 [2024-07-25 12:44:38.893217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.621 [2024-07-25 12:44:38.902538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.621 [2024-07-25 12:44:38.903048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.621 [2024-07-25 12:44:38.903064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.621 [2024-07-25 12:44:38.903071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.621 [2024-07-25 12:44:38.903270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.621 [2024-07-25 12:44:38.903470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.903479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.903486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:38.906716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.622 [2024-07-25 12:44:38.916049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:38.916540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:38.916562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:38.916570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:38.916769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:38.916968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.916977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.916984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:38.920208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.622 [2024-07-25 12:44:38.929532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:38.930035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:38.930051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:38.930058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:38.930257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:38.930456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.930465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.930472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:38.933698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.622 [2024-07-25 12:44:38.943017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:38.943501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:38.943517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:38.943524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:38.943732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:38.943933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.943942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.943949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:38.947170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.622 [2024-07-25 12:44:38.956499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:38.957031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:38.957047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:38.957054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:38.957257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:38.957456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.957464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.957471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:38.960697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.622 [2024-07-25 12:44:38.970006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:38.970492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:38.970508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:38.970515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:38.970721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:38.970922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.970930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.970936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:38.974157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.622 [2024-07-25 12:44:38.983461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:38.983958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:38.983974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:38.983981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:38.984180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:38.984379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.984388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.984395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:38.987621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.622 [2024-07-25 12:44:38.996927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:38.997414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:38.997428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:38.997436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:38.997641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:38.997841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:38.997850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:38.997860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:39.001086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.622 [2024-07-25 12:44:39.010395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:39.010966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:39.010981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:39.010988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:39.011187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:39.011386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:39.011396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:39.011402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:39.014639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.622 [2024-07-25 12:44:39.023948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:39.024466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:39.024481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:39.024488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.622 [2024-07-25 12:44:39.024693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.622 [2024-07-25 12:44:39.024894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.622 [2024-07-25 12:44:39.024903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.622 [2024-07-25 12:44:39.024909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.622 [2024-07-25 12:44:39.028128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.622 [2024-07-25 12:44:39.037438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.622 [2024-07-25 12:44:39.037982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.622 [2024-07-25 12:44:39.037997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.622 [2024-07-25 12:44:39.038004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.623 [2024-07-25 12:44:39.038203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.623 [2024-07-25 12:44:39.038402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.623 [2024-07-25 12:44:39.038412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.623 [2024-07-25 12:44:39.038418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.041651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.884 [2024-07-25 12:44:39.050960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.884 [2024-07-25 12:44:39.051485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.884 [2024-07-25 12:44:39.051499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.884 [2024-07-25 12:44:39.051506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.884 [2024-07-25 12:44:39.051718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.884 [2024-07-25 12:44:39.051918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.884 [2024-07-25 12:44:39.051927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.884 [2024-07-25 12:44:39.051933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.055153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.884 [2024-07-25 12:44:39.064467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.884 [2024-07-25 12:44:39.064991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.884 [2024-07-25 12:44:39.065006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.884 [2024-07-25 12:44:39.065013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.884 [2024-07-25 12:44:39.065213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.884 [2024-07-25 12:44:39.065412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.884 [2024-07-25 12:44:39.065421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.884 [2024-07-25 12:44:39.065428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.068654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.884 [2024-07-25 12:44:39.077964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.884 [2024-07-25 12:44:39.078482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.884 [2024-07-25 12:44:39.078497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.884 [2024-07-25 12:44:39.078504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.884 [2024-07-25 12:44:39.078709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.884 [2024-07-25 12:44:39.078909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.884 [2024-07-25 12:44:39.078918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.884 [2024-07-25 12:44:39.078925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.082144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.884 [2024-07-25 12:44:39.091456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.884 [2024-07-25 12:44:39.091962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.884 [2024-07-25 12:44:39.091977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.884 [2024-07-25 12:44:39.091984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.884 [2024-07-25 12:44:39.092184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.884 [2024-07-25 12:44:39.092387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.884 [2024-07-25 12:44:39.092396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.884 [2024-07-25 12:44:39.092402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.095655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.884 [2024-07-25 12:44:39.104983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.884 [2024-07-25 12:44:39.105506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.884 [2024-07-25 12:44:39.105521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.884 [2024-07-25 12:44:39.105528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.884 [2024-07-25 12:44:39.105733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.884 [2024-07-25 12:44:39.105933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.884 [2024-07-25 12:44:39.105943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.884 [2024-07-25 12:44:39.105949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.109168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.884 [2024-07-25 12:44:39.118484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.884 [2024-07-25 12:44:39.119012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.884 [2024-07-25 12:44:39.119027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.884 [2024-07-25 12:44:39.119035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.884 [2024-07-25 12:44:39.119234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.884 [2024-07-25 12:44:39.119433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.884 [2024-07-25 12:44:39.119442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.884 [2024-07-25 12:44:39.119449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.122674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.884 [2024-07-25 12:44:39.131980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.884 [2024-07-25 12:44:39.132499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.884 [2024-07-25 12:44:39.132515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.884 [2024-07-25 12:44:39.132522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.884 [2024-07-25 12:44:39.132728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.884 [2024-07-25 12:44:39.132928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.884 [2024-07-25 12:44:39.132936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.884 [2024-07-25 12:44:39.132944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.884 [2024-07-25 12:44:39.136176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.885 [2024-07-25 12:44:39.145505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.146034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.146049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.146056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.146255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.146455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.146464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.146470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.149702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.885 [2024-07-25 12:44:39.159037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.159579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.159596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.159603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.159803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.160003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.160011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.160017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.163239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.885 [2024-07-25 12:44:39.172577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.173086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.173101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.173108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.173307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.173506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.173514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.173521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.176753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.885 [2024-07-25 12:44:39.186071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.186649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.186685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.186701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.186920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.187124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.187132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.187139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.190373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.885 [2024-07-25 12:44:39.199692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.200333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.200368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.200378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.200605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.200809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.200819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.200826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.204053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.885 [2024-07-25 12:44:39.213182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.213681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.213700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.213707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.213907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.214107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.214117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.214123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.217346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.885 [2024-07-25 12:44:39.226655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.227145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.227180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.227190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.227408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.227618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.227633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.227641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.230867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.885 [2024-07-25 12:44:39.240206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.240792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.240811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.240819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.241019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.241220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.241229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.241236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.244454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.885 [2024-07-25 12:44:39.253771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.885 [2024-07-25 12:44:39.254177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.885 [2024-07-25 12:44:39.254193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.885 [2024-07-25 12:44:39.254200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.885 [2024-07-25 12:44:39.254399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.885 [2024-07-25 12:44:39.254604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.885 [2024-07-25 12:44:39.254613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.885 [2024-07-25 12:44:39.254620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.885 [2024-07-25 12:44:39.257842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.885 [2024-07-25 12:44:39.267337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.886 [2024-07-25 12:44:39.267929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.886 [2024-07-25 12:44:39.267965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.886 [2024-07-25 12:44:39.267975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.886 [2024-07-25 12:44:39.268193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.886 [2024-07-25 12:44:39.268397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.886 [2024-07-25 12:44:39.268407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.886 [2024-07-25 12:44:39.268414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.886 [2024-07-25 12:44:39.271646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:05.886 [2024-07-25 12:44:39.280965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.886 [2024-07-25 12:44:39.281592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.886 [2024-07-25 12:44:39.281627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.886 [2024-07-25 12:44:39.281639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.886 [2024-07-25 12:44:39.281859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.886 [2024-07-25 12:44:39.282062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.886 [2024-07-25 12:44:39.282071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.886 [2024-07-25 12:44:39.282078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.886 [2024-07-25 12:44:39.285311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:05.886 [2024-07-25 12:44:39.294434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.886 [2024-07-25 12:44:39.294965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.886 [2024-07-25 12:44:39.294984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:05.886 [2024-07-25 12:44:39.294992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:05.886 [2024-07-25 12:44:39.295192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:05.886 [2024-07-25 12:44:39.295392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:05.886 [2024-07-25 12:44:39.295401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:05.886 [2024-07-25 12:44:39.295408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.886 [2024-07-25 12:44:39.298633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.147 [2024-07-25 12:44:39.307969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.147 [2024-07-25 12:44:39.308598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.308633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.308646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.308865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.309068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.309078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.309085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.312316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.148 [2024-07-25 12:44:39.321449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.321989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.322025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.322037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.322262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.322466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.322475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.322482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.325713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.148 [2024-07-25 12:44:39.335023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.335533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.335556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.335564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.335764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.335964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.335973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.335979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.339201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.148 [2024-07-25 12:44:39.348505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.348986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.349002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.349009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.349209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.349409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.349419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.349426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.352659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.148 [2024-07-25 12:44:39.361982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.362489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.362505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.362512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.362717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.362917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.362926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.362936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.366160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.148 [2024-07-25 12:44:39.375479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.376014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.376030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.376038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.376237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.376437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.376446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.376452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.379681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.148 [2024-07-25 12:44:39.389004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.389649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.389685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.389697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.389918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.390122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.390131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.390138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.393370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.148 [2024-07-25 12:44:39.402494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.403134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.403169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.403180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.403398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.403608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.403619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.403626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.406854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.148 [2024-07-25 12:44:39.415983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.416419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.416436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.416444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.416649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.416850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.416859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.148 [2024-07-25 12:44:39.416867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.148 [2024-07-25 12:44:39.420088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.148 [2024-07-25 12:44:39.429589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.148 [2024-07-25 12:44:39.430188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.148 [2024-07-25 12:44:39.430224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.148 [2024-07-25 12:44:39.430234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.148 [2024-07-25 12:44:39.430452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.148 [2024-07-25 12:44:39.430665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.148 [2024-07-25 12:44:39.430676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.430683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.433910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.149 [2024-07-25 12:44:39.443222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.443733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.443752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.443760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.443960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.444160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.444169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.444176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.447397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.149 [2024-07-25 12:44:39.456722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.457208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.457224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.457231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.457430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.457641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.457651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.457658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.460877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.149 [2024-07-25 12:44:39.470184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.470614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.470630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.470637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.470838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.471037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.471046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.471053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.474273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.149 [2024-07-25 12:44:39.483769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.484369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.484404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.484415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.484639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.484844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.484853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.484860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.488086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.149 [2024-07-25 12:44:39.497398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.497906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.497925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.497932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.498133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.498333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.498342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.498348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.501580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.149 [2024-07-25 12:44:39.510891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.511328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.511343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.511351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.511554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.511755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.511765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.511771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.515027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.149 [2024-07-25 12:44:39.524342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.524872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.524889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.524896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.525095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.525295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.525304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.525310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.528532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.149 [2024-07-25 12:44:39.537844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.538364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.538380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.538387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.538590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.538792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.538800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.538807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.542028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.149 [2024-07-25 12:44:39.551335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.551836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.551852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.551863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.149 [2024-07-25 12:44:39.552062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.149 [2024-07-25 12:44:39.552262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.149 [2024-07-25 12:44:39.552270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.149 [2024-07-25 12:44:39.552276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.149 [2024-07-25 12:44:39.555498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.149 [2024-07-25 12:44:39.564813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.149 [2024-07-25 12:44:39.565307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.149 [2024-07-25 12:44:39.565322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.149 [2024-07-25 12:44:39.565329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.150 [2024-07-25 12:44:39.565528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.150 [2024-07-25 12:44:39.565733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.150 [2024-07-25 12:44:39.565742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.150 [2024-07-25 12:44:39.565749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.412 [2024-07-25 12:44:39.568967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.412 [2024-07-25 12:44:39.578269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.412 [2024-07-25 12:44:39.578751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.412 [2024-07-25 12:44:39.578767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.412 [2024-07-25 12:44:39.578774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.412 [2024-07-25 12:44:39.578974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.412 [2024-07-25 12:44:39.579173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.412 [2024-07-25 12:44:39.579183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.412 [2024-07-25 12:44:39.579190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.412 [2024-07-25 12:44:39.582409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.413 [2024-07-25 12:44:39.591716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.413 [2024-07-25 12:44:39.592205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.413 [2024-07-25 12:44:39.592220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.413 [2024-07-25 12:44:39.592227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.413 [2024-07-25 12:44:39.592426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.413 [2024-07-25 12:44:39.592631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.413 [2024-07-25 12:44:39.592644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.413 [2024-07-25 12:44:39.592651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.413 [2024-07-25 12:44:39.595874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.413 [2024-07-25 12:44:39.605178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.413 [2024-07-25 12:44:39.605643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.413 [2024-07-25 12:44:39.605658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.413 [2024-07-25 12:44:39.605666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.414 [2024-07-25 12:44:39.605865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.414 [2024-07-25 12:44:39.606064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.414 [2024-07-25 12:44:39.606073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.414 [2024-07-25 12:44:39.606080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.414 [2024-07-25 12:44:39.609300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.414 [2024-07-25 12:44:39.618813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.414 [2024-07-25 12:44:39.619340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.414 [2024-07-25 12:44:39.619356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.414 [2024-07-25 12:44:39.619364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.414 [2024-07-25 12:44:39.619568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.415 [2024-07-25 12:44:39.619769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.415 [2024-07-25 12:44:39.619779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.415 [2024-07-25 12:44:39.619785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.415 [2024-07-25 12:44:39.623005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.415 [2024-07-25 12:44:39.632320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.415 [2024-07-25 12:44:39.632864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.415 [2024-07-25 12:44:39.632879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.415 [2024-07-25 12:44:39.632886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.415 [2024-07-25 12:44:39.633085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.415 [2024-07-25 12:44:39.633285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.415 [2024-07-25 12:44:39.633294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.415 [2024-07-25 12:44:39.633300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.415 [2024-07-25 12:44:39.636524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.415 [2024-07-25 12:44:39.645846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.416 [2024-07-25 12:44:39.646467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.416 [2024-07-25 12:44:39.646503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.416 [2024-07-25 12:44:39.646514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.416 [2024-07-25 12:44:39.646741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.416 [2024-07-25 12:44:39.646946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.416 [2024-07-25 12:44:39.646955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.416 [2024-07-25 12:44:39.646962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.416 [2024-07-25 12:44:39.650188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.416 [2024-07-25 12:44:39.659321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.416 [2024-07-25 12:44:39.659880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.416 [2024-07-25 12:44:39.659898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.416 [2024-07-25 12:44:39.659906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.416 [2024-07-25 12:44:39.660106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.416 [2024-07-25 12:44:39.660306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.417 [2024-07-25 12:44:39.660315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.417 [2024-07-25 12:44:39.660322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.417 [2024-07-25 12:44:39.663544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.417 [2024-07-25 12:44:39.672863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.417 [2024-07-25 12:44:39.673393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.417 [2024-07-25 12:44:39.673409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.417 [2024-07-25 12:44:39.673417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.417 [2024-07-25 12:44:39.673620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.417 [2024-07-25 12:44:39.673820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.417 [2024-07-25 12:44:39.673830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.417 [2024-07-25 12:44:39.673837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.417 [2024-07-25 12:44:39.677059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.417 [2024-07-25 12:44:39.686365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.417 [2024-07-25 12:44:39.686857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.417 [2024-07-25 12:44:39.686873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.418 [2024-07-25 12:44:39.686881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.418 [2024-07-25 12:44:39.687084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.418 [2024-07-25 12:44:39.687284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.418 [2024-07-25 12:44:39.687293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.418 [2024-07-25 12:44:39.687299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.418 [2024-07-25 12:44:39.690521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.418 [2024-07-25 12:44:39.699827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.418 [2024-07-25 12:44:39.700355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.418 [2024-07-25 12:44:39.700372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.418 [2024-07-25 12:44:39.700379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.418 [2024-07-25 12:44:39.700582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.418 [2024-07-25 12:44:39.700782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.418 [2024-07-25 12:44:39.700792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.418 [2024-07-25 12:44:39.700798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.418 [2024-07-25 12:44:39.704019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.418 [2024-07-25 12:44:39.713333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.418 [2024-07-25 12:44:39.713859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.418 [2024-07-25 12:44:39.713874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.418 [2024-07-25 12:44:39.713881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.418 [2024-07-25 12:44:39.714080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.419 [2024-07-25 12:44:39.714280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.419 [2024-07-25 12:44:39.714289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.419 [2024-07-25 12:44:39.714296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.419 [2024-07-25 12:44:39.717525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.419 [2024-07-25 12:44:39.726866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.419 [2024-07-25 12:44:39.727401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.419 [2024-07-25 12:44:39.727417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.419 [2024-07-25 12:44:39.727425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.419 [2024-07-25 12:44:39.727628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.419 [2024-07-25 12:44:39.727829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.419 [2024-07-25 12:44:39.727838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.419 [2024-07-25 12:44:39.727848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.419 [2024-07-25 12:44:39.731068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.419 [2024-07-25 12:44:39.740372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.419 [2024-07-25 12:44:39.740879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.419 [2024-07-25 12:44:39.740894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.419 [2024-07-25 12:44:39.740902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.419 [2024-07-25 12:44:39.741101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.419 [2024-07-25 12:44:39.741301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.420 [2024-07-25 12:44:39.741310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.420 [2024-07-25 12:44:39.741316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.420 [2024-07-25 12:44:39.744535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.420 [2024-07-25 12:44:39.753848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.420 [2024-07-25 12:44:39.754374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.420 [2024-07-25 12:44:39.754389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.420 [2024-07-25 12:44:39.754396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.420 [2024-07-25 12:44:39.754599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.420 [2024-07-25 12:44:39.754799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.420 [2024-07-25 12:44:39.754807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.420 [2024-07-25 12:44:39.754814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.420 [2024-07-25 12:44:39.758034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.420 [2024-07-25 12:44:39.767335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.420 [2024-07-25 12:44:39.767801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.420 [2024-07-25 12:44:39.767816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.420 [2024-07-25 12:44:39.767823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.420 [2024-07-25 12:44:39.768023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.421 [2024-07-25 12:44:39.768225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.421 [2024-07-25 12:44:39.768234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.421 [2024-07-25 12:44:39.768241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.421 [2024-07-25 12:44:39.771463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.421 [2024-07-25 12:44:39.780963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.421 [2024-07-25 12:44:39.781499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.421 [2024-07-25 12:44:39.781514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.421 [2024-07-25 12:44:39.781521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.421 [2024-07-25 12:44:39.781723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.421 [2024-07-25 12:44:39.781924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.421 [2024-07-25 12:44:39.781933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.421 [2024-07-25 12:44:39.781940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.421 [2024-07-25 12:44:39.785157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.421 [2024-07-25 12:44:39.794458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.421 [2024-07-25 12:44:39.794987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.421 [2024-07-25 12:44:39.795002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.421 [2024-07-25 12:44:39.795009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.421 [2024-07-25 12:44:39.795208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.421 [2024-07-25 12:44:39.795407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.421 [2024-07-25 12:44:39.795416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.421 [2024-07-25 12:44:39.795423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.421 [2024-07-25 12:44:39.798645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.421 [2024-07-25 12:44:39.807950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.421 [2024-07-25 12:44:39.808435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.421 [2024-07-25 12:44:39.808451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.421 [2024-07-25 12:44:39.808458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.421 [2024-07-25 12:44:39.808661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.421 [2024-07-25 12:44:39.808861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.421 [2024-07-25 12:44:39.808870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.421 [2024-07-25 12:44:39.808876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.421 [2024-07-25 12:44:39.812095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.421 [2024-07-25 12:44:39.821401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.421 [2024-07-25 12:44:39.821871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.421 [2024-07-25 12:44:39.821888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.422 [2024-07-25 12:44:39.821895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.422 [2024-07-25 12:44:39.822095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.422 [2024-07-25 12:44:39.822298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.422 [2024-07-25 12:44:39.822308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.422 [2024-07-25 12:44:39.822314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.422 [2024-07-25 12:44:39.825533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.686 [2024-07-25 12:44:39.835040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.686 [2024-07-25 12:44:39.835521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.686 [2024-07-25 12:44:39.835536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.686 [2024-07-25 12:44:39.835542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.686 [2024-07-25 12:44:39.835748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.686 [2024-07-25 12:44:39.835948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.686 [2024-07-25 12:44:39.835957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.686 [2024-07-25 12:44:39.835963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.686 [2024-07-25 12:44:39.839181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.686 [2024-07-25 12:44:39.848489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.686 [2024-07-25 12:44:39.848985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.686 [2024-07-25 12:44:39.849000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.686 [2024-07-25 12:44:39.849007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.686 [2024-07-25 12:44:39.849206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.686 [2024-07-25 12:44:39.849406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.686 [2024-07-25 12:44:39.849415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.686 [2024-07-25 12:44:39.849421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.686 [2024-07-25 12:44:39.852653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.686 [2024-07-25 12:44:39.861962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.686 [2024-07-25 12:44:39.862451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.686 [2024-07-25 12:44:39.862466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.686 [2024-07-25 12:44:39.862473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.686 [2024-07-25 12:44:39.862676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.686 [2024-07-25 12:44:39.862876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.686 [2024-07-25 12:44:39.862885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.686 [2024-07-25 12:44:39.862892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.686 [2024-07-25 12:44:39.866121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.686 [2024-07-25 12:44:39.875516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.686 [2024-07-25 12:44:39.876025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.686 [2024-07-25 12:44:39.876042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.686 [2024-07-25 12:44:39.876049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.686 [2024-07-25 12:44:39.876248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.686 [2024-07-25 12:44:39.876448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.686 [2024-07-25 12:44:39.876457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.686 [2024-07-25 12:44:39.876464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.686 [2024-07-25 12:44:39.879688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.686 [2024-07-25 12:44:39.888989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.686 [2024-07-25 12:44:39.889499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.686 [2024-07-25 12:44:39.889514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.686 [2024-07-25 12:44:39.889521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.686 [2024-07-25 12:44:39.889725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.686 [2024-07-25 12:44:39.889924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.686 [2024-07-25 12:44:39.889933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.686 [2024-07-25 12:44:39.889940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.686 [2024-07-25 12:44:39.893159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.686 [2024-07-25 12:44:39.902467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.686 [2024-07-25 12:44:39.902966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.686 [2024-07-25 12:44:39.902981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.686 [2024-07-25 12:44:39.902988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.903187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.903386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.903395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.903402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:39.906628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.687 [2024-07-25 12:44:39.915933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:39.916455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:39.916470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:39.916480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.916685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.916886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.916895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.916901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:39.920119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.687 [2024-07-25 12:44:39.929450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:39.929946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:39.929962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:39.929969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.930169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.930368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.930377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.930383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:39.933608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.687 [2024-07-25 12:44:39.942927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:39.943436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:39.943451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:39.943458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.943662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.943862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.943871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.943878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:39.947094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.687 [2024-07-25 12:44:39.956402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:39.956934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:39.956949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:39.956956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.957156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.957355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.957367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.957373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:39.960600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.687 [2024-07-25 12:44:39.969897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:39.970425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:39.970440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:39.970447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.970651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.970852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.970861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.970868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:39.974085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.687 [2024-07-25 12:44:39.983386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:39.983773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:39.983789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:39.983796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.983995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.984194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.984203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.984210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:39.987430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.687 [2024-07-25 12:44:39.996921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:39.997407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:39.997422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:39.997429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:39.997633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.687 [2024-07-25 12:44:39.997833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.687 [2024-07-25 12:44:39.997842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.687 [2024-07-25 12:44:39.997849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.687 [2024-07-25 12:44:40.001067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.687 [2024-07-25 12:44:40.010429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.687 [2024-07-25 12:44:40.010995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.687 [2024-07-25 12:44:40.011031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.687 [2024-07-25 12:44:40.011043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.687 [2024-07-25 12:44:40.011263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.688 [2024-07-25 12:44:40.011466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.688 [2024-07-25 12:44:40.011474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.688 [2024-07-25 12:44:40.011482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.688 [2024-07-25 12:44:40.014716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.688 [2024-07-25 12:44:40.024043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.688 [2024-07-25 12:44:40.024561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.688 [2024-07-25 12:44:40.024579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.688 [2024-07-25 12:44:40.024587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.688 [2024-07-25 12:44:40.024788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.688 [2024-07-25 12:44:40.024988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.688 [2024-07-25 12:44:40.024996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.688 [2024-07-25 12:44:40.025003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.688 [2024-07-25 12:44:40.028224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.688 [2024-07-25 12:44:40.037529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.688 [2024-07-25 12:44:40.038159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.688 [2024-07-25 12:44:40.038195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.688 [2024-07-25 12:44:40.038205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.688 [2024-07-25 12:44:40.038423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.688 [2024-07-25 12:44:40.038633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.688 [2024-07-25 12:44:40.038643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.688 [2024-07-25 12:44:40.038650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.688 [2024-07-25 12:44:40.041879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.688 [2024-07-25 12:44:40.051001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.688 [2024-07-25 12:44:40.051614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.688 [2024-07-25 12:44:40.051649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.688 [2024-07-25 12:44:40.051660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.688 [2024-07-25 12:44:40.051882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.688 [2024-07-25 12:44:40.052085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.688 [2024-07-25 12:44:40.052095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.688 [2024-07-25 12:44:40.052102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.688 [2024-07-25 12:44:40.055344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.688 [2024-07-25 12:44:40.064466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.688 [2024-07-25 12:44:40.065096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.688 [2024-07-25 12:44:40.065132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.688 [2024-07-25 12:44:40.065142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.688 [2024-07-25 12:44:40.065360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.688 [2024-07-25 12:44:40.065571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.688 [2024-07-25 12:44:40.065582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.688 [2024-07-25 12:44:40.065589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.688 [2024-07-25 12:44:40.068819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.688 [2024-07-25 12:44:40.077941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.688 [2024-07-25 12:44:40.078481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.688 [2024-07-25 12:44:40.078499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.688 [2024-07-25 12:44:40.078507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.688 [2024-07-25 12:44:40.078712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.688 [2024-07-25 12:44:40.078913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.688 [2024-07-25 12:44:40.078921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.688 [2024-07-25 12:44:40.078928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.688 [2024-07-25 12:44:40.082155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.688 [2024-07-25 12:44:40.091468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.688 [2024-07-25 12:44:40.092003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.688 [2024-07-25 12:44:40.092020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.688 [2024-07-25 12:44:40.092027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.688 [2024-07-25 12:44:40.092227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.688 [2024-07-25 12:44:40.092427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.688 [2024-07-25 12:44:40.092436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.688 [2024-07-25 12:44:40.092447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.688 [2024-07-25 12:44:40.095673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.688 [2024-07-25 12:44:40.104977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.949 [2024-07-25 12:44:40.105504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.949 [2024-07-25 12:44:40.105520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.949 [2024-07-25 12:44:40.105527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.949 [2024-07-25 12:44:40.105734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.949 [2024-07-25 12:44:40.105934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.949 [2024-07-25 12:44:40.105943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.949 [2024-07-25 12:44:40.105949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.949 [2024-07-25 12:44:40.109170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.949 [2024-07-25 12:44:40.118477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.949 [2024-07-25 12:44:40.118977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.118993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.119000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.119199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.119398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.119408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.119414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.122855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.950 [2024-07-25 12:44:40.132018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.132533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.132556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.132564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.132765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.132965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.132973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.132980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.136198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.950 [2024-07-25 12:44:40.145535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.146085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.146102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.146109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.146309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.146508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.146517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.146524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.149751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.950 [2024-07-25 12:44:40.159070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.159643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.159679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.159689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.159907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.160111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.160120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.160127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.163360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.950 [2024-07-25 12:44:40.172671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.173287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.173322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.173332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.173559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.173763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.173772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.173779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.177004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.950 [2024-07-25 12:44:40.186127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.186659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.186694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.186706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.186933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.187137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.187146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.187153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.190387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.950 [2024-07-25 12:44:40.199701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.200307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.200342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.200352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.200578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.200782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.200792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.200799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.204024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.950 [2024-07-25 12:44:40.213144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.213764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.213799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.213810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.214028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.214231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.214240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.214247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.217487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.950 [2024-07-25 12:44:40.226613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.227234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.227269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.227279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.227497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.227709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.227720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.227727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.950 [2024-07-25 12:44:40.230956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.950 [2024-07-25 12:44:40.240076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.950 [2024-07-25 12:44:40.240674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.950 [2024-07-25 12:44:40.240709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.950 [2024-07-25 12:44:40.240721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.950 [2024-07-25 12:44:40.240940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.950 [2024-07-25 12:44:40.241143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.950 [2024-07-25 12:44:40.241152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.950 [2024-07-25 12:44:40.241160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.244392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.951 [2024-07-25 12:44:40.253710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.254250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.254268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.254276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.254476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.254682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.254692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.254699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.257997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.951 [2024-07-25 12:44:40.267308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.267932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.267968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.267977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.268196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.268399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.268408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.268415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.271650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.951 [2024-07-25 12:44:40.280772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.281398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.281437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.281448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.281676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.281881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.281890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.281897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.285123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.951 [2024-07-25 12:44:40.294243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.294867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.294903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.294913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.295131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.295334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.295343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.295350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.298585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.951 [2024-07-25 12:44:40.307706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.308194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.308229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.308239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.308457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.308669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.308679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.308686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.311911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.951 [2024-07-25 12:44:40.321227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.321741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.321777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.321788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.322008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.322215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.322225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.322232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.325462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.951 [2024-07-25 12:44:40.334774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.335412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.335447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.335457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.335683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.335887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.335897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.335904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.339128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:06.951 [2024-07-25 12:44:40.348279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.348897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.348933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.348944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.349164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.349367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.349376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.349383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.352616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.951 [2024-07-25 12:44:40.361744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:06.951 [2024-07-25 12:44:40.362362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-07-25 12:44:40.362397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:06.951 [2024-07-25 12:44:40.362407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:06.951 [2024-07-25 12:44:40.362632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:06.951 [2024-07-25 12:44:40.362836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:06.951 [2024-07-25 12:44:40.362845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:06.951 [2024-07-25 12:44:40.362853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:06.951 [2024-07-25 12:44:40.366078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.212 [2024-07-25 12:44:40.375209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.212 [2024-07-25 12:44:40.375834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.212 [2024-07-25 12:44:40.375870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.212 [2024-07-25 12:44:40.375880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.212 [2024-07-25 12:44:40.376098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.212 [2024-07-25 12:44:40.376301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.212 [2024-07-25 12:44:40.376310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.212 [2024-07-25 12:44:40.376317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.212 [2024-07-25 12:44:40.379544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.212 [2024-07-25 12:44:40.388667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.212 [2024-07-25 12:44:40.389181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.212 [2024-07-25 12:44:40.389199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.212 [2024-07-25 12:44:40.389207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.212 [2024-07-25 12:44:40.389407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.212 [2024-07-25 12:44:40.389614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.212 [2024-07-25 12:44:40.389625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.212 [2024-07-25 12:44:40.389631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.212 [2024-07-25 12:44:40.392855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.212 [2024-07-25 12:44:40.402158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.212 [2024-07-25 12:44:40.402825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.212 [2024-07-25 12:44:40.402861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.212 [2024-07-25 12:44:40.402870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.212 [2024-07-25 12:44:40.403088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.212 [2024-07-25 12:44:40.403292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.212 [2024-07-25 12:44:40.403301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.212 [2024-07-25 12:44:40.403308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.212 [2024-07-25 12:44:40.406539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.212 [2024-07-25 12:44:40.415661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.212 [2024-07-25 12:44:40.416251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.212 [2024-07-25 12:44:40.416286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.212 [2024-07-25 12:44:40.416302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.212 [2024-07-25 12:44:40.416521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.212 [2024-07-25 12:44:40.416733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.212 [2024-07-25 12:44:40.416744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.212 [2024-07-25 12:44:40.416751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.212 [2024-07-25 12:44:40.419983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.212 [2024-07-25 12:44:40.429291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.429899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.429934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.429944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.430162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.430366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.430375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.430382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.433615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.213 [2024-07-25 12:44:40.442923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.443443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.443461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.443468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.443675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.443876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.443884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.443891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.447111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.213 [2024-07-25 12:44:40.456422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.457003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.457039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.457049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.457267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.457471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.457480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.457491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.460726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.213 [2024-07-25 12:44:40.470038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.470642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.470677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.470688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.470908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.471111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.471120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.471127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.474358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.213 [2024-07-25 12:44:40.483670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.484258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.484293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.484303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.484521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.484734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.484745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.484752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.487978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.213 [2024-07-25 12:44:40.497289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.497879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.497915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.497926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.498144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.498348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.498356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.498363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.501596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.213 [2024-07-25 12:44:40.510907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.511563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.511599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.511610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.511830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.512033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.512043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.512050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.515285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.213 [2024-07-25 12:44:40.524412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.525036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.525072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.525082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.525301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.525504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.525514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.525521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.528754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.213 [2024-07-25 12:44:40.537875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.538379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.538397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.538405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.538610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.538810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.213 [2024-07-25 12:44:40.538819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.213 [2024-07-25 12:44:40.538826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.213 [2024-07-25 12:44:40.542046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.213 [2024-07-25 12:44:40.551347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.213 [2024-07-25 12:44:40.551764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.213 [2024-07-25 12:44:40.551782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.213 [2024-07-25 12:44:40.551789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.213 [2024-07-25 12:44:40.551993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.213 [2024-07-25 12:44:40.552193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.214 [2024-07-25 12:44:40.552202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.214 [2024-07-25 12:44:40.552210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.214 [2024-07-25 12:44:40.555464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.214 [2024-07-25 12:44:40.564969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.214 [2024-07-25 12:44:40.565498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.214 [2024-07-25 12:44:40.565513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.214 [2024-07-25 12:44:40.565520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.214 [2024-07-25 12:44:40.565726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.214 [2024-07-25 12:44:40.565926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.214 [2024-07-25 12:44:40.565934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.214 [2024-07-25 12:44:40.565941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.214 [2024-07-25 12:44:40.569161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.214 [2024-07-25 12:44:40.578464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.214 [2024-07-25 12:44:40.579082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.214 [2024-07-25 12:44:40.579118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.214 [2024-07-25 12:44:40.579128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.214 [2024-07-25 12:44:40.579346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.214 [2024-07-25 12:44:40.579558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.214 [2024-07-25 12:44:40.579568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.214 [2024-07-25 12:44:40.579576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.214 [2024-07-25 12:44:40.582804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.214 [2024-07-25 12:44:40.591926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.214 [2024-07-25 12:44:40.592469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.214 [2024-07-25 12:44:40.592487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.214 [2024-07-25 12:44:40.592495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.214 [2024-07-25 12:44:40.592699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.214 [2024-07-25 12:44:40.592901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.214 [2024-07-25 12:44:40.592910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.214 [2024-07-25 12:44:40.592917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.214 [2024-07-25 12:44:40.596144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.214 [2024-07-25 12:44:40.605454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.214 [2024-07-25 12:44:40.605986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.214 [2024-07-25 12:44:40.606002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.214 [2024-07-25 12:44:40.606009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.214 [2024-07-25 12:44:40.606209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.214 [2024-07-25 12:44:40.606408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.214 [2024-07-25 12:44:40.606416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.214 [2024-07-25 12:44:40.606423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.214 [2024-07-25 12:44:40.609647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.214 [2024-07-25 12:44:40.618954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.214 [2024-07-25 12:44:40.619490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.214 [2024-07-25 12:44:40.619505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.214 [2024-07-25 12:44:40.619513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.214 [2024-07-25 12:44:40.619717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.214 [2024-07-25 12:44:40.619917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.214 [2024-07-25 12:44:40.619925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.214 [2024-07-25 12:44:40.619932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.214 [2024-07-25 12:44:40.623150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.475 [2024-07-25 12:44:40.632452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.475 [2024-07-25 12:44:40.632960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.475 [2024-07-25 12:44:40.632975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.475 [2024-07-25 12:44:40.632983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.475 [2024-07-25 12:44:40.633182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.475 [2024-07-25 12:44:40.633382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.475 [2024-07-25 12:44:40.633391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.475 [2024-07-25 12:44:40.633398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.475 [2024-07-25 12:44:40.636624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.475 [2024-07-25 12:44:40.645926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.475 [2024-07-25 12:44:40.646436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.475 [2024-07-25 12:44:40.646455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.475 [2024-07-25 12:44:40.646462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.475 [2024-07-25 12:44:40.646666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.475 [2024-07-25 12:44:40.646866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.475 [2024-07-25 12:44:40.646876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.475 [2024-07-25 12:44:40.646882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.475 [2024-07-25 12:44:40.650102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.475 [2024-07-25 12:44:40.659415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.475 [2024-07-25 12:44:40.660009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.475 [2024-07-25 12:44:40.660044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.475 [2024-07-25 12:44:40.660054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.475 [2024-07-25 12:44:40.660272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.475 [2024-07-25 12:44:40.660475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.475 [2024-07-25 12:44:40.660485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.475 [2024-07-25 12:44:40.660492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.475 [2024-07-25 12:44:40.663725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.475 [2024-07-25 12:44:40.673032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.475 [2024-07-25 12:44:40.673645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.475 [2024-07-25 12:44:40.673680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.475 [2024-07-25 12:44:40.673691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.475 [2024-07-25 12:44:40.673910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.475 [2024-07-25 12:44:40.674113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.475 [2024-07-25 12:44:40.674123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.475 [2024-07-25 12:44:40.674130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.475 [2024-07-25 12:44:40.677362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.475 [2024-07-25 12:44:40.686487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.475 [2024-07-25 12:44:40.687100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.475 [2024-07-25 12:44:40.687136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.475 [2024-07-25 12:44:40.687146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.687363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.687579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.687589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.687596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.690820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.476 [2024-07-25 12:44:40.699938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.700576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.700612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.700623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.700842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.701046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.701055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.701063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.704299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.476 [2024-07-25 12:44:40.713423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.714056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.714091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.714101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.714319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.714523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.714531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.714538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.717781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.476 [2024-07-25 12:44:40.726902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.727447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.727466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.727473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.727680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.727881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.727890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.727896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.731119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.476 [2024-07-25 12:44:40.740430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.740942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.740958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.740965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.741164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.741364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.741372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.741379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.744603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.476 [2024-07-25 12:44:40.753906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.754426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.754441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.754448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.754652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.754852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.754862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.754868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.758094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.476 [2024-07-25 12:44:40.767432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.767977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.767993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.768000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.768200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.768399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.768407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.768414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.771638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.476 [2024-07-25 12:44:40.780946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.781470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.781485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.781496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.781701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.781901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.781910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.781917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.785135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.476 [2024-07-25 12:44:40.794446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.794950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.794965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.794974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.476 [2024-07-25 12:44:40.795174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.476 [2024-07-25 12:44:40.795375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.476 [2024-07-25 12:44:40.795384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.476 [2024-07-25 12:44:40.795390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.476 [2024-07-25 12:44:40.798619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 611770 Killed "${NVMF_APP[@]}" "$@" 00:32:07.476 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:07.476 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:07.476 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:07.476 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:07.476 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:07.476 [2024-07-25 12:44:40.807941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.476 [2024-07-25 12:44:40.808407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.476 [2024-07-25 12:44:40.808422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.476 [2024-07-25 12:44:40.808429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.477 [2024-07-25 12:44:40.808633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.477 [2024-07-25 12:44:40.808833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.477 [2024-07-25 12:44:40.808842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.477 [2024-07-25 12:44:40.808849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.477 [2024-07-25 12:44:40.812069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=613050 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 613050 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 613050 ']' 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:07.477 12:44:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:07.477 [2024-07-25 12:44:40.821585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.477 [2024-07-25 12:44:40.822115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.477 [2024-07-25 12:44:40.822130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.477 [2024-07-25 12:44:40.822137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.477 [2024-07-25 12:44:40.822337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.477 [2024-07-25 12:44:40.822536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.477 [2024-07-25 12:44:40.822545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.477 [2024-07-25 12:44:40.822558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.477 [2024-07-25 12:44:40.825781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.477 [2024-07-25 12:44:40.835099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.477 [2024-07-25 12:44:40.835592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.477 [2024-07-25 12:44:40.835608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.477 [2024-07-25 12:44:40.835615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.477 [2024-07-25 12:44:40.835814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.477 [2024-07-25 12:44:40.836014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.477 [2024-07-25 12:44:40.836023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.477 [2024-07-25 12:44:40.836029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.477 [2024-07-25 12:44:40.839253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.477 [2024-07-25 12:44:40.848573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.477 [2024-07-25 12:44:40.848976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.477 [2024-07-25 12:44:40.848993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.477 [2024-07-25 12:44:40.849000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.477 [2024-07-25 12:44:40.849201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.477 [2024-07-25 12:44:40.849401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.477 [2024-07-25 12:44:40.849414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.477 [2024-07-25 12:44:40.849421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.477 [2024-07-25 12:44:40.852651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.477 [2024-07-25 12:44:40.861004] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:32:07.477 [2024-07-25 12:44:40.861049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.477 [2024-07-25 12:44:40.862166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.477 [2024-07-25 12:44:40.862670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.477 [2024-07-25 12:44:40.862687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.477 [2024-07-25 12:44:40.862694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.477 [2024-07-25 12:44:40.862893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.477 [2024-07-25 12:44:40.863093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.477 [2024-07-25 12:44:40.863102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.477 [2024-07-25 12:44:40.863109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.477 [2024-07-25 12:44:40.866330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.477 [2024-07-25 12:44:40.875645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.477 [2024-07-25 12:44:40.876176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.477 [2024-07-25 12:44:40.876192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.477 [2024-07-25 12:44:40.876200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.477 [2024-07-25 12:44:40.876399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.477 [2024-07-25 12:44:40.876605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.477 [2024-07-25 12:44:40.876614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.477 [2024-07-25 12:44:40.876620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.477 [2024-07-25 12:44:40.879846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.477 [2024-07-25 12:44:40.889162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.477 [2024-07-25 12:44:40.889701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.477 [2024-07-25 12:44:40.889717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.477 [2024-07-25 12:44:40.889724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.477 [2024-07-25 12:44:40.889923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.477 [2024-07-25 12:44:40.890123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.477 [2024-07-25 12:44:40.890132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.477 [2024-07-25 12:44:40.890146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.477 [2024-07-25 12:44:40.893370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.739 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.739 [2024-07-25 12:44:40.902782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.903315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.903332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.903339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.903538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.903744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.903754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.903761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:40.906983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.739 [2024-07-25 12:44:40.916300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.916881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.916917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.916929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.917148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.917351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.917360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.917367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:40.920610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.739 [2024-07-25 12:44:40.929923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.930466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.930484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.930492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.930698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.930898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.930907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.930914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:40.934133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.739 [2024-07-25 12:44:40.943438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.943967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.943970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:07.739 [2024-07-25 12:44:40.943983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.943991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.944191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.944392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.944400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.944407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:40.947636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.739 [2024-07-25 12:44:40.956957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.957507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.957523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.957530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.957734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.957935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.957945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.957951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:40.961170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.739 [2024-07-25 12:44:40.970523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.971068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.971084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.971092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.971292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.971494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.971504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.971510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:40.974735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.739 [2024-07-25 12:44:40.984052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.984557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.984574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.984582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.984787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.984989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.984997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.985004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:40.988227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.739 [2024-07-25 12:44:40.997532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.739 [2024-07-25 12:44:40.998169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.739 [2024-07-25 12:44:40.998208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.739 [2024-07-25 12:44:40.998219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.739 [2024-07-25 12:44:40.998443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.739 [2024-07-25 12:44:40.998655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.739 [2024-07-25 12:44:40.998665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.739 [2024-07-25 12:44:40.998673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.739 [2024-07-25 12:44:41.001899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.740 [2024-07-25 12:44:41.011026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.011677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.011713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.011723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.011944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.012147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.012157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.012164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.015396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.740 [2024-07-25 12:44:41.021742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.740 [2024-07-25 12:44:41.021775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.740 [2024-07-25 12:44:41.021785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.740 [2024-07-25 12:44:41.021794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.740 [2024-07-25 12:44:41.021802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:07.740 [2024-07-25 12:44:41.021940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.740 [2024-07-25 12:44:41.022061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.740 [2024-07-25 12:44:41.022062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.740 [2024-07-25 12:44:41.024539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.025174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.025209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.025219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.025440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.025650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.025660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.025668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.028914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.740 [2024-07-25 12:44:41.038048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.038618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.038638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.038646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.038848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.039048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.039057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.039064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.042290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.740 [2024-07-25 12:44:41.051613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.052156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.052173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.052181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.052381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.052586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.052595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.052602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.055839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.740 [2024-07-25 12:44:41.065151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.065828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.065867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.065879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.066109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.066313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.066322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.066329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.069566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.740 [2024-07-25 12:44:41.078694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.079331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.079367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.079377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.079606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.079810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.079819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.079827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.083056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.740 [2024-07-25 12:44:41.092182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.092833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.092868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.092879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.093097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.093300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.093309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.093316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.096553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.740 [2024-07-25 12:44:41.105680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.106302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.106337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.106348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.106574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.106778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.106788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.106800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.110028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.740 [2024-07-25 12:44:41.119172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.740 [2024-07-25 12:44:41.119764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.740 [2024-07-25 12:44:41.119799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.740 [2024-07-25 12:44:41.119810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.740 [2024-07-25 12:44:41.120028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.740 [2024-07-25 12:44:41.120232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.740 [2024-07-25 12:44:41.120241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.740 [2024-07-25 12:44:41.120249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.740 [2024-07-25 12:44:41.123479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.741 [2024-07-25 12:44:41.132800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.741 [2024-07-25 12:44:41.133312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.741 [2024-07-25 12:44:41.133330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.741 [2024-07-25 12:44:41.133337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.741 [2024-07-25 12:44:41.133537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.741 [2024-07-25 12:44:41.133745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.741 [2024-07-25 12:44:41.133755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.741 [2024-07-25 12:44:41.133762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.741 [2024-07-25 12:44:41.136985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.741 [2024-07-25 12:44:41.146293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:07.741 [2024-07-25 12:44:41.146894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.741 [2024-07-25 12:44:41.146930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:07.741 [2024-07-25 12:44:41.146941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:07.741 [2024-07-25 12:44:41.147160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:07.741 [2024-07-25 12:44:41.147363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:07.741 [2024-07-25 12:44:41.147372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:07.741 [2024-07-25 12:44:41.147380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:07.741 [2024-07-25 12:44:41.150613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.003 [2024-07-25 12:44:41.159773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.003 [2024-07-25 12:44:41.160320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.003 [2024-07-25 12:44:41.160356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.003 [2024-07-25 12:44:41.160366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.003 [2024-07-25 12:44:41.160591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.003 [2024-07-25 12:44:41.160795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.003 [2024-07-25 12:44:41.160805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.003 [2024-07-25 12:44:41.160812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.003 [2024-07-25 12:44:41.164039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.003 [2024-07-25 12:44:41.173355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.003 [2024-07-25 12:44:41.173892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.003 [2024-07-25 12:44:41.173910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.003 [2024-07-25 12:44:41.173917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.003 [2024-07-25 12:44:41.174118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.003 [2024-07-25 12:44:41.174318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.003 [2024-07-25 12:44:41.174327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.003 [2024-07-25 12:44:41.174334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.003 [2024-07-25 12:44:41.177607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.003 [2024-07-25 12:44:41.186931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.003 [2024-07-25 12:44:41.187515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.003 [2024-07-25 12:44:41.187558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.003 [2024-07-25 12:44:41.187570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.003 [2024-07-25 12:44:41.187788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.003 [2024-07-25 12:44:41.187991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.003 [2024-07-25 12:44:41.188000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.003 [2024-07-25 12:44:41.188007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.003 [2024-07-25 12:44:41.191237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.003 [2024-07-25 12:44:41.200557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.003 [2024-07-25 12:44:41.201072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.003 [2024-07-25 12:44:41.201107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.003 [2024-07-25 12:44:41.201118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.003 [2024-07-25 12:44:41.201338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.003 [2024-07-25 12:44:41.201554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.003 [2024-07-25 12:44:41.201564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.003 [2024-07-25 12:44:41.201571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.003 [2024-07-25 12:44:41.204800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.003 [2024-07-25 12:44:41.214115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.003 [2024-07-25 12:44:41.214559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.003 [2024-07-25 12:44:41.214578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.003 [2024-07-25 12:44:41.214586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.003 [2024-07-25 12:44:41.214786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.003 [2024-07-25 12:44:41.214986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.003 [2024-07-25 12:44:41.214995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.003 [2024-07-25 12:44:41.215002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.003 [2024-07-25 12:44:41.218254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.003 [2024-07-25 12:44:41.227583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.003 [2024-07-25 12:44:41.228212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.003 [2024-07-25 12:44:41.228247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.003 [2024-07-25 12:44:41.228258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.003 [2024-07-25 12:44:41.228478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.228688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.228699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.228706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.231935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.004 [2024-07-25 12:44:41.241064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.241601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.241620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.241628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.241828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.242028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.242038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.242044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.245274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.004 [2024-07-25 12:44:41.254588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.255219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.255254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.255264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.255482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.255702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.255713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.255720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.258948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.004 [2024-07-25 12:44:41.268070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.268661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.268697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.268707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.268926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.269129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.269138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.269145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.272379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.004 [2024-07-25 12:44:41.281699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.282094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.282112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.282119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.282319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.282519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.282528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.282535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.285762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.004 [2024-07-25 12:44:41.295261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.295855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.295891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.295905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.296124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.296328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.296337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.296344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.299579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.004 [2024-07-25 12:44:41.308896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.309293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.309311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.309319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.309520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.309725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.309734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.309741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.312962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.004 [2024-07-25 12:44:41.322464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.323095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.323130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.323140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.323358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.323569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.323579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.323587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.326812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.004 [2024-07-25 12:44:41.335942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.336603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.336639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.336649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.336867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.337070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.337084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.337091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.340324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.004 [2024-07-25 12:44:41.349449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.350011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.350048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.004 [2024-07-25 12:44:41.350058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.004 [2024-07-25 12:44:41.350276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.004 [2024-07-25 12:44:41.350480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.004 [2024-07-25 12:44:41.350489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.004 [2024-07-25 12:44:41.350496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.004 [2024-07-25 12:44:41.353730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.004 [2024-07-25 12:44:41.363052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.004 [2024-07-25 12:44:41.363752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.004 [2024-07-25 12:44:41.363787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.005 [2024-07-25 12:44:41.363798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.005 [2024-07-25 12:44:41.364016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.005 [2024-07-25 12:44:41.364220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.005 [2024-07-25 12:44:41.364229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.005 [2024-07-25 12:44:41.364236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.005 [2024-07-25 12:44:41.367466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.005 [2024-07-25 12:44:41.376629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.005 [2024-07-25 12:44:41.377123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.005 [2024-07-25 12:44:41.377158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.005 [2024-07-25 12:44:41.377168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.005 [2024-07-25 12:44:41.377386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.005 [2024-07-25 12:44:41.377599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.005 [2024-07-25 12:44:41.377610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.005 [2024-07-25 12:44:41.377617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.005 [2024-07-25 12:44:41.380847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.005 [2024-07-25 12:44:41.390194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.005 [2024-07-25 12:44:41.390617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.005 [2024-07-25 12:44:41.390635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.005 [2024-07-25 12:44:41.390643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.005 [2024-07-25 12:44:41.390843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.005 [2024-07-25 12:44:41.391043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.005 [2024-07-25 12:44:41.391051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.005 [2024-07-25 12:44:41.391058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.005 [2024-07-25 12:44:41.394283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.005 [2024-07-25 12:44:41.403789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.005 [2024-07-25 12:44:41.404419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.005 [2024-07-25 12:44:41.404456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.005 [2024-07-25 12:44:41.404466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.005 [2024-07-25 12:44:41.404691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.005 [2024-07-25 12:44:41.404895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.005 [2024-07-25 12:44:41.404904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.005 [2024-07-25 12:44:41.404911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.005 [2024-07-25 12:44:41.408137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.005 [2024-07-25 12:44:41.417260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.005 [2024-07-25 12:44:41.417777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.005 [2024-07-25 12:44:41.417797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.005 [2024-07-25 12:44:41.417805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.005 [2024-07-25 12:44:41.418005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.005 [2024-07-25 12:44:41.418205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.005 [2024-07-25 12:44:41.418214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.005 [2024-07-25 12:44:41.418221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.005 [2024-07-25 12:44:41.421451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.267 [2024-07-25 12:44:41.430767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.431243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.267 [2024-07-25 12:44:41.431279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.267 [2024-07-25 12:44:41.431289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.267 [2024-07-25 12:44:41.431512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.267 [2024-07-25 12:44:41.431725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.267 [2024-07-25 12:44:41.431735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.267 [2024-07-25 12:44:41.431742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.267 [2024-07-25 12:44:41.434972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.267 [2024-07-25 12:44:41.444286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.444885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.267 [2024-07-25 12:44:41.444921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.267 [2024-07-25 12:44:41.444930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.267 [2024-07-25 12:44:41.445149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.267 [2024-07-25 12:44:41.445353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.267 [2024-07-25 12:44:41.445362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.267 [2024-07-25 12:44:41.445369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.267 [2024-07-25 12:44:41.448604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.267 [2024-07-25 12:44:41.457925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.458474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.267 [2024-07-25 12:44:41.458492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.267 [2024-07-25 12:44:41.458499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.267 [2024-07-25 12:44:41.458705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.267 [2024-07-25 12:44:41.458906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.267 [2024-07-25 12:44:41.458915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.267 [2024-07-25 12:44:41.458922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.267 [2024-07-25 12:44:41.462143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.267 [2024-07-25 12:44:41.471450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.472068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.267 [2024-07-25 12:44:41.472103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.267 [2024-07-25 12:44:41.472113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.267 [2024-07-25 12:44:41.472332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.267 [2024-07-25 12:44:41.472535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.267 [2024-07-25 12:44:41.472545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.267 [2024-07-25 12:44:41.472574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.267 [2024-07-25 12:44:41.475805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.267 [2024-07-25 12:44:41.484934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.485469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.267 [2024-07-25 12:44:41.485488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.267 [2024-07-25 12:44:41.485496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.267 [2024-07-25 12:44:41.485701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.267 [2024-07-25 12:44:41.485902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.267 [2024-07-25 12:44:41.485911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.267 [2024-07-25 12:44:41.485917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.267 [2024-07-25 12:44:41.489136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.267 [2024-07-25 12:44:41.498447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.499087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.267 [2024-07-25 12:44:41.499122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.267 [2024-07-25 12:44:41.499132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.267 [2024-07-25 12:44:41.499351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.267 [2024-07-25 12:44:41.499562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.267 [2024-07-25 12:44:41.499571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.267 [2024-07-25 12:44:41.499579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.267 [2024-07-25 12:44:41.502804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.267 [2024-07-25 12:44:41.511938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.512448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.267 [2024-07-25 12:44:41.512484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.267 [2024-07-25 12:44:41.512495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.267 [2024-07-25 12:44:41.512720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.267 [2024-07-25 12:44:41.512924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.267 [2024-07-25 12:44:41.512936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.267 [2024-07-25 12:44:41.512943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.267 [2024-07-25 12:44:41.516173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.267 [2024-07-25 12:44:41.525500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.267 [2024-07-25 12:44:41.525811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.525836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.525844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.526050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.526251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.526260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.526267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.529494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.268 [2024-07-25 12:44:41.539001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.539622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.539658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.539669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.539890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.540094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.540104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.540111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.543344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.268 [2024-07-25 12:44:41.552473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.552996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.553014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.553022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.553222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.553421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.553430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.553436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.556673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.268 [2024-07-25 12:44:41.565984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.566620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.566655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.566667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.566886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.567094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.567104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.567111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.570345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.268 [2024-07-25 12:44:41.579480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.580086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.580122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.580132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.580350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.580561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.580572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.580579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.583807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.268 [2024-07-25 12:44:41.592962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.593595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.593631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.593643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.593863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.594066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.594076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.594083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.597313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.268 [2024-07-25 12:44:41.606435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.607075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.607111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.607121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.607339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.607543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.607558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.607566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.610802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.268 [2024-07-25 12:44:41.619927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.620532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.620575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.620588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.620807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.621020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.621030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.621038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.624264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.268 [2024-07-25 12:44:41.633388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.633990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.268 [2024-07-25 12:44:41.634025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.268 [2024-07-25 12:44:41.634036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.268 [2024-07-25 12:44:41.634255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.268 [2024-07-25 12:44:41.634459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.268 [2024-07-25 12:44:41.634468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.268 [2024-07-25 12:44:41.634475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.268 [2024-07-25 12:44:41.637708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.268 [2024-07-25 12:44:41.647020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.268 [2024-07-25 12:44:41.647574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.269 [2024-07-25 12:44:41.647593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.269 [2024-07-25 12:44:41.647601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.269 [2024-07-25 12:44:41.647802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.269 [2024-07-25 12:44:41.648003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.269 [2024-07-25 12:44:41.648013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.269 [2024-07-25 12:44:41.648020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.269 [2024-07-25 12:44:41.651243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.269 [2024-07-25 12:44:41.660572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.269 [2024-07-25 12:44:41.661075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.269 [2024-07-25 12:44:41.661092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.269 [2024-07-25 12:44:41.661103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.269 [2024-07-25 12:44:41.661303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.269 [2024-07-25 12:44:41.661504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.269 [2024-07-25 12:44:41.661513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.269 [2024-07-25 12:44:41.661520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.269 [2024-07-25 12:44:41.664745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.269 [2024-07-25 12:44:41.674050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.269 [2024-07-25 12:44:41.674532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.269 [2024-07-25 12:44:41.674552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.269 [2024-07-25 12:44:41.674560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.269 [2024-07-25 12:44:41.674759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.269 [2024-07-25 12:44:41.674959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.269 [2024-07-25 12:44:41.674968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.269 [2024-07-25 12:44:41.674975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.269 [2024-07-25 12:44:41.678193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.530 [2024-07-25 12:44:41.687496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 [2024-07-25 12:44:41.688029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.688044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.688050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 [2024-07-25 12:44:41.688249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 [2024-07-25 12:44:41.688450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.688459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 [2024-07-25 12:44:41.688465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 [2024-07-25 12:44:41.691690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.531 [2024-07-25 12:44:41.700990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 [2024-07-25 12:44:41.701627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.701662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.701672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 [2024-07-25 12:44:41.701891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 [2024-07-25 12:44:41.702094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.702107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 [2024-07-25 12:44:41.702114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 [2024-07-25 12:44:41.705343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.531 [2024-07-25 12:44:41.714470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:32:08.531 [2024-07-25 12:44:41.714959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.714994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.715004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:08.531 [2024-07-25 12:44:41.715222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 [2024-07-25 12:44:41.715426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.715435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:08.531 [2024-07-25 12:44:41.715442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.531 [2024-07-25 12:44:41.718676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.531 [2024-07-25 12:44:41.728000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 [2024-07-25 12:44:41.728562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.728598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.728609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 [2024-07-25 12:44:41.728830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 [2024-07-25 12:44:41.729035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.729045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 [2024-07-25 12:44:41.729053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 [2024-07-25 12:44:41.732291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.531 [2024-07-25 12:44:41.741613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 [2024-07-25 12:44:41.742247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.742282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.742292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 [2024-07-25 12:44:41.742511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 [2024-07-25 12:44:41.742721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.742736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 [2024-07-25 12:44:41.742744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 [2024-07-25 12:44:41.745979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.531 [2024-07-25 12:44:41.755104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.531 [2024-07-25 12:44:41.755571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.755590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.755598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.531 [2024-07-25 12:44:41.755798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.531 [2024-07-25 12:44:41.755998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.756008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 [2024-07-25 12:44:41.756015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.531 [2024-07-25 12:44:41.759245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.531 [2024-07-25 12:44:41.762751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.531 [2024-07-25 12:44:41.768553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 [2024-07-25 12:44:41.769182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.769217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.769227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 [2024-07-25 12:44:41.769445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 [2024-07-25 12:44:41.769655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.769666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 [2024-07-25 12:44:41.769673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 [2024-07-25 12:44:41.772898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.531 [2024-07-25 12:44:41.782015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.531 [2024-07-25 12:44:41.782567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.531 [2024-07-25 12:44:41.782585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.531 [2024-07-25 12:44:41.782593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.531 [2024-07-25 12:44:41.782794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.531 [2024-07-25 12:44:41.782999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.531 [2024-07-25 12:44:41.783008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.531 [2024-07-25 12:44:41.783015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.531 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.531 [2024-07-25 12:44:41.786239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.531 [2024-07-25 12:44:41.795552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.532 [2024-07-25 12:44:41.796050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.532 [2024-07-25 12:44:41.796066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.532 [2024-07-25 12:44:41.796073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.532 [2024-07-25 12:44:41.796272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.532 [2024-07-25 12:44:41.796473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.532 [2024-07-25 12:44:41.796482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.532 [2024-07-25 12:44:41.796488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.532 [2024-07-25 12:44:41.799712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:08.532 Malloc0 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 [2024-07-25 12:44:41.809061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.532 [2024-07-25 12:44:41.809447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.532 [2024-07-25 12:44:41.809465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.532 [2024-07-25 12:44:41.809472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.532 [2024-07-25 12:44:41.809677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.532 [2024-07-25 12:44:41.809879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.532 [2024-07-25 12:44:41.809888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.532 [2024-07-25 12:44:41.809895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:08.532 [2024-07-25 12:44:41.813117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 [2024-07-25 12:44:41.822626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.532 [2024-07-25 12:44:41.823247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.532 [2024-07-25 12:44:41.823284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66180 with addr=10.0.0.2, port=4420 00:32:08.532 [2024-07-25 12:44:41.823296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66180 is same with the state(5) to be set 00:32:08.532 [2024-07-25 12:44:41.823517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66180 (9): Bad file descriptor 00:32:08.532 [2024-07-25 12:44:41.823729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:08.532 [2024-07-25 12:44:41.823739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:08.532 [2024-07-25 12:44:41.823746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
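Interleaved with the reset errors, the xtrace lines show the bdevperf test standing up its NVMe-oF target over JSON-RPC: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), a 64 MiB malloc bdev with 512-byte blocks (bdev_malloc_create 64 512 -b Malloc0), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and, in the block that follows, a TCP listener on 10.0.0.2:4420. Issued directly with SPDK's scripts/rpc.py rather than through the test's rpc_cmd wrapper, the equivalent sequence would look roughly like the sketch below (it assumes a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket).
  # Rough standalone equivalent of the RPC calls made by host/bdevperf.sh above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420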
00:32:08.532 [2024-07-25 12:44:41.826971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 [2024-07-25 12:44:41.836094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.532 [2024-07-25 12:44:41.836323] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.532 12:44:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 612111 00:32:08.532 [2024-07-25 12:44:41.869985] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:18.529 00:32:18.529 Latency(us) 00:32:18.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.529 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:18.529 Verification LBA range: start 0x0 length 0x4000 00:32:18.529 Nvme1n1 : 15.02 2999.87 11.72 10925.74 0.00 9168.34 1228.80 44564.48 00:32:18.529 =================================================================================================================== 00:32:18.529 Total : 2999.87 11.72 10925.74 0.00 9168.34 1228.80 44564.48 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.529 rmmod nvme_tcp 00:32:18.529 rmmod nvme_fabrics 00:32:18.529 rmmod nvme_keyring 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 613050 ']' 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 613050 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 613050 ']' 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 613050 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 613050 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 613050' 00:32:18.529 killing process with pid 613050 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 613050 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 613050 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.529 12:44:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:19.916 00:32:19.916 real 0m29.345s 00:32:19.916 user 1m4.190s 00:32:19.916 sys 0m8.271s 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.916 ************************************ 00:32:19.916 END TEST nvmf_bdevperf 00:32:19.916 ************************************ 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:19.916 12:44:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.916 ************************************ 00:32:19.916 START TEST nvmf_target_disconnect 00:32:19.916 ************************************ 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:19.916 * Looking for test storage... 00:32:19.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.916 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:19.917 12:44:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:32:19.917 12:44:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:32:28.077 12:45:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:28.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:28.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:28.077 12:45:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:28.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.077 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:28.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:28.078 12:45:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:28.078 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:28.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:32:28.356 00:32:28.356 --- 10.0.0.2 ping statistics --- 00:32:28.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.356 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:28.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:32:28.356 00:32:28.356 --- 10.0.0.1 ping statistics --- 00:32:28.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.356 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:28.356 ************************************ 00:32:28.356 START TEST nvmf_target_disconnect_tc1 00:32:28.356 ************************************ 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:28.356 12:45:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:28.356 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.356 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.356 [2024-07-25 12:45:01.750622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.357 [2024-07-25 12:45:01.750713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ec860 with addr=10.0.0.2, port=4420 00:32:28.357 [2024-07-25 12:45:01.750743] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:28.357 [2024-07-25 12:45:01.750763] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:28.357 [2024-07-25 12:45:01.750777] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:28.357 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:28.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:28.357 Initializing NVMe Controllers 00:32:28.357 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:32:28.357 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:28.357 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:28.357 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:28.357 00:32:28.357 real 0m0.135s 00:32:28.357 user 0m0.042s 00:32:28.357 sys 0m0.092s 00:32:28.357 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:28.357 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:28.357 ************************************ 00:32:28.357 END TEST nvmf_target_disconnect_tc1 00:32:28.357 ************************************ 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 ************************************ 00:32:28.667 START TEST nvmf_target_disconnect_tc2 00:32:28.667 ************************************ 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=619216 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 619216 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 619216 ']' 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.667 12:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 [2024-07-25 12:45:01.951103] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
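The tc2 case above starts its own target with nvmfappstart -m 0xF0 (pid 619216, waited on with waitforlisten), and the same mask appears in the DPDK EAL parameters that follow. 0xF0 is decimal 240, binary 1111 0000, so bits 4 through 7 are set and SPDK places one reactor on each of cores 4-7, matching the "Reactor started on core 4/5/6/7" notices printed below once the app is up. A quick check of the mask arithmetic (illustrative only, not part of the test):
  # -m 0xF0 -> decimal 240 -> binary 11110000 -> reactor threads on cores 4,5,6,7
  printf '%d\n' 0xF0                                                      # 240
  python3 -c 'print(bin(0xF0), [i for i in range(8) if 0xF0 >> i & 1])'   # 0b11110000 [4, 5, 6, 7]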
00:32:28.667 [2024-07-25 12:45:01.951231] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.667 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.928 [2024-07-25 12:45:02.154566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:28.928 [2024-07-25 12:45:02.323211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.928 [2024-07-25 12:45:02.323300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.928 [2024-07-25 12:45:02.323328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.928 [2024-07-25 12:45:02.323351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.928 [2024-07-25 12:45:02.323371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.928 [2024-07-25 12:45:02.323587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:32:28.928 [2024-07-25 12:45:02.323723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:32:28.928 [2024-07-25 12:45:02.323880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:32:28.928 [2024-07-25 12:45:02.323889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.502 Malloc0 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.502 [2024-07-25 12:45:02.870975] 
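Before the core of the tc2 case runs, the target started above still has to be configured. The rpc_cmd calls traced below create a 64 MB Malloc bdev with 512-byte blocks, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (allow-any-host, serial SPDK00000000000001), attach Malloc0 as a namespace, and add TCP listeners on 10.0.0.2:4420. A minimal sketch of the same sequence, assuming SPDK's scripts/rpc.py at the path shown and the default /var/tmp/spdk.sock RPC socket the target is listening on (rpc.py here stands in for the test framework's rpc_cmd wrapper; the flags are copied from the trace below, and the log's "discovery" shorthand is written out as the standard discovery NQN):

    # Hedged sketch, not the test script itself: same RPCs as the rpc_cmd
    # calls traced below, issued with scripts/rpc.py (path assumed) against
    # the default /var/tmp/spdk.sock socket of the nvmf_tgt started above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 4420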
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.502 [2024-07-25 12:45:02.911909] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.502 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:29.762 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.762 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=619539 00:32:29.762 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:29.762 12:45:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:29.762 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.684 12:45:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 619216 00:32:31.684 12:45:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 [2024-07-25 12:45:04.957160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 
Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Write completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 Read completed with error (sct=0, sc=8) 00:32:31.684 starting I/O failed 00:32:31.684 [2024-07-25 12:45:04.957526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:31.684 [2024-07-25 12:45:04.958053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.684 [2024-07-25 12:45:04.958437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.684 qpair failed and we were unable to recover it. 00:32:31.684 [2024-07-25 12:45:04.958834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.684 [2024-07-25 12:45:04.958899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.684 qpair failed and we were unable to recover it. 
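The two 32-entry dumps above are the reconnect example's in-flight reads and writes being failed back when the target disappears: host/target_disconnect.sh starts the example with queue depth 32, sleeps 2 seconds, then kill -9's nvmf_tgt pid 619216, so every outstanding command on qpair ids 4 and 3 completes with sct=0 / sc=0x8 (which matches the NVMe generic "Command Aborted due to SQ Deletion" status) and spdk_nvme_qpair_process_completions reports CQ transport error -6. Everything after that is the example retrying connect() to 10.0.0.2:4420 and failing with errno 111 while nothing is listening. A minimal sketch of that choreography, with placeholder variable names rather than the script's own:

    # Hedged sketch of the disconnect choreography shown above; RECONNECT and
    # NVMF_PID are placeholders (the target pid is 619216 in this log), not
    # the variables used by host/target_disconnect.sh itself.
    RECONNECT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
    NVMF_PID=619216
    "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    RECONNECT_PID=$!
    sleep 2              # let the host connect and queue I/O on each qpair
    kill -9 "$NVMF_PID"  # hard-kill the target mid-run
    sleep 2              # outstanding commands now complete with sct=0 / sc=0x8
    wait "$RECONNECT_PID"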
00:32:31.684 [2024-07-25 12:45:04.959187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.684 [2024-07-25 12:45:04.959200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.684 qpair failed and we were unable to recover it. 00:32:31.684 [2024-07-25 12:45:04.959334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.684 [2024-07-25 12:45:04.959346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.684 qpair failed and we were unable to recover it. 00:32:31.684 [2024-07-25 12:45:04.959653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.684 [2024-07-25 12:45:04.959666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.684 qpair failed and we were unable to recover it. 00:32:31.684 [2024-07-25 12:45:04.959896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.684 [2024-07-25 12:45:04.959908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.960121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.960136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.960465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.960477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.960792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.960806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.961120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.961133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.961372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.961387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.961619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.961633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 
00:32:31.685 [2024-07-25 12:45:04.961956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.961968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.962169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.962187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.962535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.962573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.962959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.962972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.963308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.963322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.963623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.963637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.964030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.964044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.964390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.964403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.964632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.964646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.964869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.964883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 
00:32:31.685 [2024-07-25 12:45:04.965203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.965216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.965575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.965593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.965912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.965928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.966233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.966247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.966583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.966597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.966927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.966941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.967323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.967337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.967625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.967639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.967971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.967985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.968291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.968304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 
00:32:31.685 [2024-07-25 12:45:04.968635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.968649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.968982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.968996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.969315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.969329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.969699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.969713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.969908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.969924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.970153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.970167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.970499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.970514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.970913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.970928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.971252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.971267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 00:32:31.685 [2024-07-25 12:45:04.971600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.685 [2024-07-25 12:45:04.971613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.685 qpair failed and we were unable to recover it. 
00:32:31.685 [2024-07-25 12:45:04.971861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.971874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.972214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.972228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.972555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.972568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.972924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.972938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.973242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.973257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.973594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.973609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.973926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.973941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.974155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.974169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.974520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.974536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.974767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.974782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 
00:32:31.686 [2024-07-25 12:45:04.975104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.975117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.975451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.975466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.975686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.975700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.976024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.976038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.976367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.976381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.976647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.976661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.976961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.976977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.977212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.977226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.977551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.977566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.977948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.977962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 
00:32:31.686 [2024-07-25 12:45:04.978270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.978284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.978583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.978597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.978923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.978938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.979271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.979285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.979627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.979640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.979953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.979966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.980278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.980292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.980600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.980616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.980846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.980863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.981137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.981156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 
00:32:31.686 [2024-07-25 12:45:04.981466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.981482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.981780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.981796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.982118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.982135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.982457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.982473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.982776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.982791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.983110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.983124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.686 [2024-07-25 12:45:04.983348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.686 [2024-07-25 12:45:04.983365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.686 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.983695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.983711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.983965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.983979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.984259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.984273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 
00:32:31.687 [2024-07-25 12:45:04.984613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.984630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.984958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.984974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.985294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.985311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.985624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.985640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.985951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.985966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.986288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.986303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.986622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.986638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.986961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.986977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.987294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.987308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.987630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.987649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 
00:32:31.687 [2024-07-25 12:45:04.987973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.987990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.988311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.988327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.988576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.988591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.988798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.988815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.989145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.989161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.989478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.989495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.989831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.989848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.990171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.990187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.990509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.990524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.990833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.990854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 
00:32:31.687 [2024-07-25 12:45:04.991214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.991233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.991557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.991576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.991925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.991944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.992288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.992308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.992621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.992641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.992993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.993013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.993326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.993345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.993690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.993710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.994059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.994077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 00:32:31.687 [2024-07-25 12:45:04.994387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.687 [2024-07-25 12:45:04.994408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.687 qpair failed and we were unable to recover it. 
00:32:31.687 [2024-07-25 12:45:04.994807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:31.687 [2024-07-25 12:45:04.994828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:31.687 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 200 more times, with only the timestamps advancing from 12:45:04.995 to 12:45:05.074 ...]
00:32:31.693 [2024-07-25 12:45:05.074116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:31.693 [2024-07-25 12:45:05.074147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:31.693 qpair failed and we were unable to recover it.
00:32:31.693 [2024-07-25 12:45:05.074498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.693 [2024-07-25 12:45:05.074528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.693 qpair failed and we were unable to recover it. 00:32:31.693 [2024-07-25 12:45:05.074907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.074938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.075276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.075306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.075652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.075683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.076048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.076079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.076418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.076450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.076838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.076869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.077092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.077120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.077459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.077488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.077848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.077880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 
00:32:31.694 [2024-07-25 12:45:05.078241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.078270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.078616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.078646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.078999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.079028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.079357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.079387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.079621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.079659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.080037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.080066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.080348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.080377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.080604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.080638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.080980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.081009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.081338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.081369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 
00:32:31.694 [2024-07-25 12:45:05.081719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.081751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.082121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.082152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.082493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.082525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.082895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.082925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.083277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.083307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.083650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.083679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.084022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.084052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.084430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.084460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.084780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.084810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.085148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.085179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 
00:32:31.694 [2024-07-25 12:45:05.085537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.085581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.085953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.085985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.086365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.086396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.086759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.086792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.087174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.087204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.087536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.087581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.088000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.088031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.088396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.694 [2024-07-25 12:45:05.088426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.694 qpair failed and we were unable to recover it. 00:32:31.694 [2024-07-25 12:45:05.088776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.088808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.089127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.089158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 
00:32:31.695 [2024-07-25 12:45:05.089521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.089562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.089933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.089969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.090313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.090344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.090689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.090721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.091063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.091095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.091442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.091471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.091824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.091854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.092197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.092230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.092580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.092612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.092993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.093023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 
00:32:31.695 [2024-07-25 12:45:05.093351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.093382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.093712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.093742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.094088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.094118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.094344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.094376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.094711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.094743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.095080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.095113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.095341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.095373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.095711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.095751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.695 [2024-07-25 12:45:05.096091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.695 [2024-07-25 12:45:05.096124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.695 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.096481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.096514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 
00:32:31.967 [2024-07-25 12:45:05.096894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.096927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.097295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.097325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.097685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.097717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.098103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.098136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.098474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.098505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.098756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.098789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.099172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.099203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.099542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.099585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.099906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.099936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.100291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.100322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 
00:32:31.967 [2024-07-25 12:45:05.100661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.100695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.101064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.101094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.101407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.101439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.101784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.101815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.102178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.102210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.102585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.102618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.102963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.102992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.103354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.103385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.103775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.103807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.104135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.104168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 
00:32:31.967 [2024-07-25 12:45:05.104509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.104540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.104883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.104916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.105255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.105293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.105661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.105692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.106055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.106087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.967 [2024-07-25 12:45:05.106454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.967 [2024-07-25 12:45:05.106485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.967 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.106738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.106769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.107162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.107194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.107577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.107608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.107984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.108015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 
00:32:31.968 [2024-07-25 12:45:05.108365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.108396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.108731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.108761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.109099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.109130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.109479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.109509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.109853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.109887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.110254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.110284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.110654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.110686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.111065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.111095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.111428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.111459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.111841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.111872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 
00:32:31.968 [2024-07-25 12:45:05.112200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.112231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.112634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.112666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.113013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.113043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.113368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.113400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.113754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.113786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.114108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.114140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.114363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.114397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.114763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.114794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.115137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.115169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.115533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.115582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 
00:32:31.968 [2024-07-25 12:45:05.115824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.115855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.116227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.116258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.116499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.116527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.116886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.116916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.117272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.117303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.117618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.117649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.117988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.118018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.118332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.118362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.118702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.118736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.119124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.119154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 
00:32:31.968 [2024-07-25 12:45:05.119501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.119532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.119891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.968 [2024-07-25 12:45:05.119922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.968 qpair failed and we were unable to recover it. 00:32:31.968 [2024-07-25 12:45:05.120252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.120284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 00:32:31.969 [2024-07-25 12:45:05.120519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.120561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 00:32:31.969 [2024-07-25 12:45:05.120904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.120935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 00:32:31.969 [2024-07-25 12:45:05.121264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.121295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 00:32:31.969 [2024-07-25 12:45:05.121480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.121511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 00:32:31.969 [2024-07-25 12:45:05.121914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.121957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 00:32:31.969 [2024-07-25 12:45:05.122353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.122406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 00:32:31.969 [2024-07-25 12:45:05.123087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.969 [2024-07-25 12:45:05.123158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.969 qpair failed and we were unable to recover it. 
00:32:31.969 [2024-07-25 12:45:05.123484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:31.969 [2024-07-25 12:45:05.123541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:31.969 qpair failed and we were unable to recover it.
[... the same triplet — posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock connection error for tqpair=0xd67e30 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats continuously from 12:45:05.123 through 12:45:05.202 ...]
00:32:31.975 [2024-07-25 12:45:05.202808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:31.975 [2024-07-25 12:45:05.202840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:31.975 qpair failed and we were unable to recover it.
00:32:31.975 [2024-07-25 12:45:05.203186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.203216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.203569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.203603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.203870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.203901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.204263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.204293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.204632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.204665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.205038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.205067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.205433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.205462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.205802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.205831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.206188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.206218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.206632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.206664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 
00:32:31.975 [2024-07-25 12:45:05.206997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.207028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.207381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.207411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.207733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.207762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.208142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.208172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.208533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.208580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.208938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.208969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.209344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.209374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.209701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.209734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.209966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.209995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.210370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.210401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 
00:32:31.975 [2024-07-25 12:45:05.210633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.210668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.211022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.211052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.211373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.211402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.211769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.211800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.212149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.212180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.212557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.212590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.212939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.212970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.213221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.213252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.213449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.213481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.213859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.213892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 
00:32:31.975 [2024-07-25 12:45:05.214242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.214273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.975 qpair failed and we were unable to recover it. 00:32:31.975 [2024-07-25 12:45:05.214584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.975 [2024-07-25 12:45:05.214616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.214761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.214791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.215162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.215193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.215540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.215592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.215950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.215983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.216396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.216426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.216843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.216874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.217220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.217252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.217528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.217569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 
00:32:31.976 [2024-07-25 12:45:05.217945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.217976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.218342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.218374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.218768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.218802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.219148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.219180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.219524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.219566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.219932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.219962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.220359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.220391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.220724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.220754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.221097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.221129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.221472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.221502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 
00:32:31.976 [2024-07-25 12:45:05.221864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.221895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.222204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.222233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.222464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.222496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.222860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.222892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.223244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.223275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.223505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.223541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.223882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.223912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.224319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.224347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.224587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.224620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.224970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.225001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 
00:32:31.976 [2024-07-25 12:45:05.225352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.225383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.225627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.225658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.976 [2024-07-25 12:45:05.226003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.976 [2024-07-25 12:45:05.226032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.976 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.226244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.226273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.226620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.226651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.226890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.226920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.227305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.227336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.227677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.227707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.228055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.228086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.228473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.228503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 
00:32:31.977 [2024-07-25 12:45:05.228864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.228896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.229230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.229260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.229490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.229518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.229853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.229884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.230202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.230230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.230582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.230614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.230865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.230893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.231258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.231288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.231653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.231686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.231923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.231954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 
00:32:31.977 [2024-07-25 12:45:05.232337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.232367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.232701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.232731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.233112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.233148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.233566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.233598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.233966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.233996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.234411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.234442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.234766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.234798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.235146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.235178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.235297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.235324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.235656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.235688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 
00:32:31.977 [2024-07-25 12:45:05.235921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.235950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.236188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.236219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.236595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.236626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.236926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.236957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.237318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.237349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.237695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.237727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.238094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.238126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.238477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.238507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.238887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.977 [2024-07-25 12:45:05.238918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.977 qpair failed and we were unable to recover it. 00:32:31.977 [2024-07-25 12:45:05.239270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.239301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 
00:32:31.978 [2024-07-25 12:45:05.239636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.239667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.239902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.239931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.240321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.240352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.240681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.240711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.241081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.241110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.241341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.241369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.241698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.241729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.242108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.242139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.242495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.242527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.242906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.242937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 
00:32:31.978 [2024-07-25 12:45:05.243262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.243292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.243632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.243664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.244043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.244075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.244462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.244492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.244857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.244889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.245104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.245131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.245496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.245526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.245905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.245936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.246143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.246171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.246432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.246464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 
00:32:31.978 [2024-07-25 12:45:05.246811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.246842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.247197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.247227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.247625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.247657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.248039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.248080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.248491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.248522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.248920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.248952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.249296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.249326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.249679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.249710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.250057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.250087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.250325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.250353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 
00:32:31.978 [2024-07-25 12:45:05.250722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.250753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.251151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.251182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.251529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.251572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.251956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.251986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.252353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.252383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.978 [2024-07-25 12:45:05.252721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.978 [2024-07-25 12:45:05.252752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.978 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.253099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.253128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.253514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.253544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.253796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.253824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.254190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.254221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 
00:32:31.979 [2024-07-25 12:45:05.254570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.254601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.254978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.255009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.255376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.255405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.255652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.255682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.256054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.256085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.256428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.256458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.256847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.256878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.257217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.257248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.257378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.257411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.257769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.257799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 
00:32:31.979 [2024-07-25 12:45:05.258173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.258209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.258561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.258591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.258950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.258982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.259357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.259387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.259623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.259652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.259877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.259906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.260252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.260283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.260619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.260652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.261027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.261058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.261295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.261325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 
00:32:31.979 [2024-07-25 12:45:05.261703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.261735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.262133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.262164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.262606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.262636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.262865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.262894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.263263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.263294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.263533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.263572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.263938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.263968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.264312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.264343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.264628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.264659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.264900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.264930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 
00:32:31.979 [2024-07-25 12:45:05.265227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.265258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.265644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.265674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.979 [2024-07-25 12:45:05.266041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.979 [2024-07-25 12:45:05.266072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.979 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.266412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.266443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.266678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.266712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.267061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.267093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.267439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.267470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.267851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.267882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.268236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.268265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.268622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.268653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 
00:32:31.980 [2024-07-25 12:45:05.268996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.269029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.269296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.269326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.269646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.269676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.270026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.270055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.270403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.270435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.270786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.270817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.271156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.271186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.271524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.271582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.271938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.271967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.272200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.272229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 
00:32:31.980 [2024-07-25 12:45:05.272463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.272493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.272885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.272923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.273145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.273174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.273528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.273571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.273974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.274004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.274341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.274372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.274714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.274745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.275013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.275041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.275395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.275424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.275774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.275806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 
00:32:31.980 [2024-07-25 12:45:05.276148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.276177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.276522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.276563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.276915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.276946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.277285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.277314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.277666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.277698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.278055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.278086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.278462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.278494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.278868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.278900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.279254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.279286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.980 qpair failed and we were unable to recover it. 00:32:31.980 [2024-07-25 12:45:05.279624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.980 [2024-07-25 12:45:05.279656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 
00:32:31.981 [2024-07-25 12:45:05.280029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.280059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.280433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.280462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.280712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.280741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.281083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.281114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.281466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.281496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.281873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.281905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.282248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.282280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.282652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.282685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.283076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.283106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.283425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.283457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 
00:32:31.981 [2024-07-25 12:45:05.283834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.283866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.284251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.284281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.284672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.284702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.285047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.285078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.285377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.285407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.285774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.285805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.286097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.286126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.286464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.286493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.286886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.286917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.287265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.287295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 
00:32:31.981 [2024-07-25 12:45:05.287647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.287680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.288038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.288067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.288449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.288480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.288817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.288848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.289193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.289223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.289600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.289633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.289999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.290030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.290363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.290394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.290648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.290678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.981 [2024-07-25 12:45:05.291035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.291070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 
00:32:31.981 [2024-07-25 12:45:05.291459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.981 [2024-07-25 12:45:05.291492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.981 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.291897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.291929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.292315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.292346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.292690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.292722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.293064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.293095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.293436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.293467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.293850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.293881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.294229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.294261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.294608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.294639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.295011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.295042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 
00:32:31.982 [2024-07-25 12:45:05.295395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.295427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.295772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.295803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.296145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.296177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.296566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.296599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.296945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.296975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.297312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.297345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.297718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.297749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.298021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.298050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.298403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.298434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.298714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.298752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 
00:32:31.982 [2024-07-25 12:45:05.299130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.299160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.299508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.299538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.299938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.299970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.300354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.300384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.300752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.300785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.301171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.301202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.301585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.301616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.301980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.302011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.302352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.302382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.302620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.302649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 
00:32:31.982 [2024-07-25 12:45:05.303043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.303074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.303417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.303447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.303787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.303824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.304254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.304284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.304663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.304696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.305071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.305102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.305451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.305483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.982 [2024-07-25 12:45:05.305825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.982 [2024-07-25 12:45:05.305855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.982 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.306079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.306107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.306473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.306504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 
00:32:31.983 [2024-07-25 12:45:05.306860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.306891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.307232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.307264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.307589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.307620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.307958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.307987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.308349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.308380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.308720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.308753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.309117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.309147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.309492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.309523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.309808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.309839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.310189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.310219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 
00:32:31.983 [2024-07-25 12:45:05.310575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.310608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.310959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.310989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.311370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.311400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.311730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.311764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.312104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.312133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.312364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.312393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.312749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.312781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.313159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.313189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.313558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.313589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.313955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.313988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 
00:32:31.983 [2024-07-25 12:45:05.314360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.314398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.314673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.314705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.315074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.315106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.315474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.315506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.315938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.315969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.316335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.316366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.316722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.316752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.317081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.317113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.317456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.317484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.317834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.317866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 
00:32:31.983 [2024-07-25 12:45:05.318216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.318247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.318657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.318688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.319036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.319068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.319309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.983 [2024-07-25 12:45:05.319338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.983 qpair failed and we were unable to recover it. 00:32:31.983 [2024-07-25 12:45:05.319707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.319740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.320094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.320124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.320470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.320501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.320760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.320789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.321137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.321166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.321504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.321535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 
00:32:31.984 [2024-07-25 12:45:05.321906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.321937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.322276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.322307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.322659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.322691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.323059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.323088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.323430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.323460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.323699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.323732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.324120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.324150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.324543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.324595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.324943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.324973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.325357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.325389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 
00:32:31.984 [2024-07-25 12:45:05.325747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.325778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.326119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.326149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.326514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.326545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.326929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.326960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.327311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.327343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.327691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.327723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.328068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.328097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.328456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.328488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.328868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.328899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.329234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.329266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 
00:32:31.984 [2024-07-25 12:45:05.329610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.329641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.330000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.330029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.330381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.330412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.330766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.330798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.331164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.331194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.331569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.331602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.331748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.331777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.332140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.332169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.332508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.332540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.332907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.332939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 
00:32:31.984 [2024-07-25 12:45:05.333293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.984 [2024-07-25 12:45:05.333323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.984 qpair failed and we were unable to recover it. 00:32:31.984 [2024-07-25 12:45:05.333537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.333580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.333942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.333972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.334325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.334354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.334712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.334745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.335115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.335144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.335486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.335518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.335880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.335912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.336261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.336291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.336636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.336666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 
00:32:31.985 [2024-07-25 12:45:05.337009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.337039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.337375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.337406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.337757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.337789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.338168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.338198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.338537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.338580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.338947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.338976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.339333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.339363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.339701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.339732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.340074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.340112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.340493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.340523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 
00:32:31.985 [2024-07-25 12:45:05.340917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.340948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.341284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.341314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.341682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.341715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.342052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.342083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.342429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.342459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.342800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.342832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.343169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.343198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.343545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.343588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.343957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.343988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.344212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.344243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 
00:32:31.985 [2024-07-25 12:45:05.344611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.344642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.345006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.345036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.345362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.345392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.345729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.345762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.346162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.346193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.346566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.346597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.346987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.347018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.985 [2024-07-25 12:45:05.347354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.985 [2024-07-25 12:45:05.347383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.985 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.347726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.347759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.348103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.348133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 
00:32:31.986 [2024-07-25 12:45:05.348482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.348513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.348886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.348918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.349257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.349288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.349622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.349655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.350033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.350063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.350403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.350441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.350675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.350707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.351050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.351081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.351456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.351487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.351827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.351859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 
00:32:31.986 [2024-07-25 12:45:05.352224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.352255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.352623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.352654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.353006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.353035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.353375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.353406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.353796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.353828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.354163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.354194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.354510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.354540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.354920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.354950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.355290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.355321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.355697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.355729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 
00:32:31.986 [2024-07-25 12:45:05.356073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.356103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.356478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.356508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.356883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.356915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.357174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.357202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.357457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.357487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.357817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.357850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.358181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.358211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.358543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.358586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.358955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.986 [2024-07-25 12:45:05.358985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.986 qpair failed and we were unable to recover it. 00:32:31.986 [2024-07-25 12:45:05.359351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.359383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 
00:32:31.987 [2024-07-25 12:45:05.359751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.359780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.360145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.360176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.360530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.360582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.360934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.360964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.361320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.361351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.361696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.361729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.362081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.362110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.362449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.362480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.362860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.362890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.363256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.363285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 
00:32:31.987 [2024-07-25 12:45:05.363641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.363673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.364058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.364086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.364431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.364460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.364793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.364821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.365165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.365195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.365531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.365574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.365917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.365954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.366294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.366324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.366667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.366699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.367039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.367069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 
00:32:31.987 [2024-07-25 12:45:05.367432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.367462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.367820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.367852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.368208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.368238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.368582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.368614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.368991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.369021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.369247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.369275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.369504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.369536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.369809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.369841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.370248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.370277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.370631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.370661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 
00:32:31.987 [2024-07-25 12:45:05.371047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.371077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.371415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.371446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.371802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.371833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.372172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.372201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.372522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.372578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.987 [2024-07-25 12:45:05.372925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.987 [2024-07-25 12:45:05.372957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.987 qpair failed and we were unable to recover it. 00:32:31.988 [2024-07-25 12:45:05.373325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.988 [2024-07-25 12:45:05.373356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.988 qpair failed and we were unable to recover it. 00:32:31.988 [2024-07-25 12:45:05.373698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.988 [2024-07-25 12:45:05.373727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.988 qpair failed and we were unable to recover it. 00:32:31.988 [2024-07-25 12:45:05.374091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.988 [2024-07-25 12:45:05.374121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.988 qpair failed and we were unable to recover it. 00:32:31.988 [2024-07-25 12:45:05.374459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.988 [2024-07-25 12:45:05.374487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.988 qpair failed and we were unable to recover it. 
00:32:31.988 [2024-07-25 12:45:05.374828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.988 [2024-07-25 12:45:05.374861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.988 qpair failed and we were unable to recover it. 00:32:31.988 [2024-07-25 12:45:05.375224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.988 [2024-07-25 12:45:05.375254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.988 qpair failed and we were unable to recover it. 00:32:31.988 [2024-07-25 12:45:05.375610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.988 [2024-07-25 12:45:05.375639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:31.988 qpair failed and we were unable to recover it. 00:32:31.988 [2024-07-25 12:45:05.375987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.259 [2024-07-25 12:45:05.376025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.259 qpair failed and we were unable to recover it. 00:32:32.259 [2024-07-25 12:45:05.376435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.259 [2024-07-25 12:45:05.376468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.259 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.376845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.376877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.377227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.377257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.377596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.377625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.377980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.378010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.378364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.378395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 
00:32:32.260 [2024-07-25 12:45:05.378740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.378772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.379178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.379209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.379545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.379590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.379931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.379961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.380315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.380347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.380685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.380716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.381065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.381095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.381465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.381496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.381853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.381885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.382273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.382304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 
00:32:32.260 [2024-07-25 12:45:05.382639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.382671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.383008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.383040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.383391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.383424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.383702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.383734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.384089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.384120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.384464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.384496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.384851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.384883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.385247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.385278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.385609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.385640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.386002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.386033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 
00:32:32.260 [2024-07-25 12:45:05.386383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.386414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.386770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.386803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.387149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.387179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.387409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.387440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.387828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.387859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.388266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.388295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.388634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.388665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.388998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.389026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.389349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.389380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.389719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.389750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 
00:32:32.260 [2024-07-25 12:45:05.390114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.260 [2024-07-25 12:45:05.390144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.260 qpair failed and we were unable to recover it. 00:32:32.260 [2024-07-25 12:45:05.390487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.390519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.390870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.390901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.391236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.391266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.391613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.391651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.392041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.392073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.392389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.392419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.392761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.392792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.393171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.393201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.393578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.393610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 
00:32:32.261 [2024-07-25 12:45:05.393951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.393981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.394302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.394333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.394528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.394569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.394923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.394953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.395337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.395367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.395596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.395626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.396040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.396070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.396410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.396440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.396830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.396860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.397202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.397232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 
00:32:32.261 [2024-07-25 12:45:05.397572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.397604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.397975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.398004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.398324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.398355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.398726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.398756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.399098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.399127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.399470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.399500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.399880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.399912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.400234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.400265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.400610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.400641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.400983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.401014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 
00:32:32.261 [2024-07-25 12:45:05.401329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.401359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.401703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.401735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.402099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.402128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.402479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.402510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.402868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.402900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.403264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.403293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.403621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.403654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.261 [2024-07-25 12:45:05.404023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.261 [2024-07-25 12:45:05.404053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.261 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.404372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.404403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.404731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.404762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 
00:32:32.262 [2024-07-25 12:45:05.405089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.405121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.405456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.405487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.405712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.405744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.406086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.406116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.406496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.406527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.406802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.406833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.407160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.407191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.407401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.407432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.407782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.407813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.408154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.408186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 
00:32:32.262 [2024-07-25 12:45:05.408491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.408522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.408909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.408940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.409283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.409314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.409656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.409687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.410040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.410070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.410436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.410468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.410822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.410853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.411221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.411251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.411592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.411623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.412005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.412035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 
00:32:32.262 [2024-07-25 12:45:05.412374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.412404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.412732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.412763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.413116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.413146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.413489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.413519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.413887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.413918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.414258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.414289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.414651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.414682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.415019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.415050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.415395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.415426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.415772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.415805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 
00:32:32.262 [2024-07-25 12:45:05.416118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.416149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.416411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.416440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.416831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.416868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.417231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.262 [2024-07-25 12:45:05.417262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.262 qpair failed and we were unable to recover it. 00:32:32.262 [2024-07-25 12:45:05.417633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.417666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.418033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.418063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.418404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.418435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.418779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.418810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.419170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.419200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.419539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.419580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 
00:32:32.263 [2024-07-25 12:45:05.419840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.419869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.420105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.420137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.420491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.420523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.420898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.420929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.421256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.421287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.421625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.421655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.422026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.422056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.422395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.422425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.422788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.422822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.423076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.423104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 
00:32:32.263 [2024-07-25 12:45:05.423462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.423492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.423864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.423896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.424236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.424268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.424595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.424626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.424854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.424885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.425214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.425243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.425574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.425607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.425990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.426019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.426374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.426404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.426729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.426759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 
00:32:32.263 [2024-07-25 12:45:05.427132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.427162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.427499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.427529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.427929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.427961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.428303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.428333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.428669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.428700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.428970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.428998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.429350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.429380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.429714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.429746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.430106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.430136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.430466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.430497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 
00:32:32.263 [2024-07-25 12:45:05.430857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.263 [2024-07-25 12:45:05.430889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.263 qpair failed and we were unable to recover it. 00:32:32.263 [2024-07-25 12:45:05.431235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.431266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.431485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.431519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.431887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.431924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.432276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.432306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.432729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.432761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.433096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.433127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.433492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.433522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.433871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.433901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.434283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.434313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 
00:32:32.264 [2024-07-25 12:45:05.434656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.434686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.435045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.435076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.435416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.435446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.435797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.435827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.436212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.436242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.436588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.436619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.436997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.437028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.437364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.437394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.437737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.437770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.438140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.438169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 
00:32:32.264 [2024-07-25 12:45:05.438513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.438543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.438877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.438910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.439258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.439288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.439652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.439682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.440005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.440034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.440302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.440332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.440670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.440704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.441072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.441103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.441448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.441479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 00:32:32.264 [2024-07-25 12:45:05.441809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.264 [2024-07-25 12:45:05.441840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.264 qpair failed and we were unable to recover it. 
00:32:32.264 [2024-07-25 12:45:05.442182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:32.264 [2024-07-25 12:45:05.442218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 
00:32:32.264 qpair failed and we were unable to recover it. 
00:32:32.264 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt between 12:45:05.442 and 12:45:05.519 ...] 
00:32:32.270 [2024-07-25 12:45:05.519126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:32.270 [2024-07-25 12:45:05.519157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 
00:32:32.270 qpair failed and we were unable to recover it. 
00:32:32.270 [2024-07-25 12:45:05.519506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.270 [2024-07-25 12:45:05.519538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.270 qpair failed and we were unable to recover it. 00:32:32.270 [2024-07-25 12:45:05.519715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.270 [2024-07-25 12:45:05.519748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.270 qpair failed and we were unable to recover it. 00:32:32.270 [2024-07-25 12:45:05.520092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.270 [2024-07-25 12:45:05.520122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.270 qpair failed and we were unable to recover it. 00:32:32.270 [2024-07-25 12:45:05.520481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.270 [2024-07-25 12:45:05.520511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.270 qpair failed and we were unable to recover it. 00:32:32.270 [2024-07-25 12:45:05.520881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.270 [2024-07-25 12:45:05.520913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.270 qpair failed and we were unable to recover it. 00:32:32.270 [2024-07-25 12:45:05.521244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.270 [2024-07-25 12:45:05.521276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.521632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.521664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.521999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.522030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.522411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.522441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.522785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.522817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 
00:32:32.271 [2024-07-25 12:45:05.523146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.523176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.523508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.523538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.523892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.523923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.524157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.524187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.524562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.524593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.524941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.524970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.525322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.525352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.525682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.525713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.526087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.526117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.526461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.526492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 
00:32:32.271 [2024-07-25 12:45:05.526853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.526885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.527227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.527258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.527613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.527645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.528010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.528041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.528356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.528386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.528731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.528763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.529111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.529141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.529464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.529495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.529849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.529882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.530218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.530248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 
00:32:32.271 [2024-07-25 12:45:05.530569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.530601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.530949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.530980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.531364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.531393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.531792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.531823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.532199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.532230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.532570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.532609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.532995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.533026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.533386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.533416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.533773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.533805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.534144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.534175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 
00:32:32.271 [2024-07-25 12:45:05.534506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.534537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.534892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.534923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.271 qpair failed and we were unable to recover it. 00:32:32.271 [2024-07-25 12:45:05.535286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.271 [2024-07-25 12:45:05.535316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.535664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.535696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.536041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.536070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.536427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.536457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.536780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.536809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.537150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.537181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.537559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.537592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.537935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.537965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 
00:32:32.272 [2024-07-25 12:45:05.538308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.538339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.538683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.538716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.539097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.539126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.539467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.539496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.539867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.539898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.540233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.540264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.540583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.540615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.540992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.541023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.541361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.541392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.541728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.541760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 
00:32:32.272 [2024-07-25 12:45:05.542110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.542140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.542476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.542506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.542870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.542902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.543142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.543174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.543560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.543592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.543929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.543960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.544303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.544333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.544676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.544709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.545079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.545110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.545464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.545495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 
00:32:32.272 [2024-07-25 12:45:05.545863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.545895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.546235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.546266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.546621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.546653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.546985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.547017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.547363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.547394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.547772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.547803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.548193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.548224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.548570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.548602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.548953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.272 [2024-07-25 12:45:05.548984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.272 qpair failed and we were unable to recover it. 00:32:32.272 [2024-07-25 12:45:05.549409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.549439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 
00:32:32.273 [2024-07-25 12:45:05.549778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.549811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.550036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.550068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.550418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.550450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.550797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.550828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.551214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.551243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.551588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.551620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.551893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.551924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.552275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.552306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.552545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.552587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.552945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.552976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 
00:32:32.273 [2024-07-25 12:45:05.553350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.553380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.553729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.553761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.554145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.554176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.554414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.554443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.554835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.554866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.555234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.555265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.555606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.555637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.555980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.556010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.556340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.556371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.556756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.556787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 
00:32:32.273 [2024-07-25 12:45:05.557137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.557167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.557508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.557537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.557917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.557948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.558280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.558316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.558665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.558697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.559034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.559065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.559293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.559327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.559703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.559734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.560093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.560124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.560525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.560565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 
00:32:32.273 [2024-07-25 12:45:05.560891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.560923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.561285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.561316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.561658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.561691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.562039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.562069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.562414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.562444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.562797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.273 [2024-07-25 12:45:05.562831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.273 qpair failed and we were unable to recover it. 00:32:32.273 [2024-07-25 12:45:05.563184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.563215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.563569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.563600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.563960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.563991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.564329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.564359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 
00:32:32.274 [2024-07-25 12:45:05.564683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.564714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.565082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.565113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.565526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.565577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.565930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.565959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.566322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.566352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.566688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.566719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.567060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.567089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.567319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.567351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.567729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.567759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 00:32:32.274 [2024-07-25 12:45:05.568121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.274 [2024-07-25 12:45:05.568151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.274 qpair failed and we were unable to recover it. 
00:32:32.274 [2024-07-25 12:45:05.568493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:32.274 [2024-07-25 12:45:05.568522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:32.274 qpair failed and we were unable to recover it.
00:32:32.274 [2024-07-25 12:45:05.568948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:32.274 [2024-07-25 12:45:05.568982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:32.274 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection retry timestamped 2024-07-25 12:45:05.569335 through 12:45:05.647205 ...]
00:32:32.280 [2024-07-25 12:45:05.647545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:32.280 [2024-07-25 12:45:05.647598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:32.280 qpair failed and we were unable to recover it.
00:32:32.280 [2024-07-25 12:45:05.647971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:32.280 [2024-07-25 12:45:05.648002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:32.280 qpair failed and we were unable to recover it.
00:32:32.280 [2024-07-25 12:45:05.648362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.648392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.648715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.648746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.649084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.649114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.649457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.649488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.649817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.649848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.650187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.650217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.650565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.650596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.650939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.650970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.651321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.651359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.651696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.651729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 
00:32:32.280 [2024-07-25 12:45:05.652111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.652141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.652480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.652512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.652930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.652961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.653282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.653313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.653652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.653683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.654051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.654082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.654454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.654485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.654832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.654862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.655101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.655131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.655474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.655505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 
00:32:32.280 [2024-07-25 12:45:05.655894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.655924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.656259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.656290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.656640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.656672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.280 [2024-07-25 12:45:05.657008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.280 [2024-07-25 12:45:05.657039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.280 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.657366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.657397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.657732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.657764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.658130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.658161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.658512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.658542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.658746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.658775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.659114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.659144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 
00:32:32.281 [2024-07-25 12:45:05.659497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.659527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.659960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.659991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.660299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.660331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.660669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.660700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.661050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.661080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.661417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.661449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.661811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.661842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.662179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.662209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.662567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.662599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.662938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.662969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 
00:32:32.281 [2024-07-25 12:45:05.663302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.663333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.663673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.663704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.664055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.664085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.664312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.664341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.664596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.664626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.664981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.665011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.665349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.665381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.665777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.665808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.666171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.666201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.666562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.666594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 
00:32:32.281 [2024-07-25 12:45:05.666957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.666988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.667328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.667359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.667677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.667708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.668062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.668093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.281 [2024-07-25 12:45:05.668360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.281 [2024-07-25 12:45:05.668390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.281 qpair failed and we were unable to recover it. 00:32:32.554 [2024-07-25 12:45:05.668737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.554 [2024-07-25 12:45:05.668773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.554 qpair failed and we were unable to recover it. 00:32:32.554 [2024-07-25 12:45:05.669119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.554 [2024-07-25 12:45:05.669150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.554 qpair failed and we were unable to recover it. 00:32:32.554 [2024-07-25 12:45:05.669485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.554 [2024-07-25 12:45:05.669516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.554 qpair failed and we were unable to recover it. 00:32:32.554 [2024-07-25 12:45:05.669767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.554 [2024-07-25 12:45:05.669801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.554 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.670145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.670178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 
00:32:32.555 [2024-07-25 12:45:05.670535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.670577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.670790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.670819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.671168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.671200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.671525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.671568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.671958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.671988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.672325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.672357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.672698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.672728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.673117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.673147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.673462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.673494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.673844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.673875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 
00:32:32.555 [2024-07-25 12:45:05.674115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.674144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.674505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.674536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.674842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.674873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.675207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.675237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.675570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.675602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.675945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.675976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.676336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.676373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.676723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.676756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.677124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.677155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.677491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.677523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 
00:32:32.555 [2024-07-25 12:45:05.677856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.677888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.678231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.678261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.678599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.678629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.678970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.679000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.679350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.679380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.679622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.679651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.680004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.680035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.680386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.680415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.680743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.680773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.681120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.681150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 
00:32:32.555 [2024-07-25 12:45:05.681537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.681593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.681948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.681979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.682336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.682367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.555 [2024-07-25 12:45:05.682706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.555 [2024-07-25 12:45:05.682738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.555 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.683079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.683110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.683477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.683507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.683837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.683868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.684203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.684233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.684571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.684603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.684923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.684954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 
00:32:32.556 [2024-07-25 12:45:05.685312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.685342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.685676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.685707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.686066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.686096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.686433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.686463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.686824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.686855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.687197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.687227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.687570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.687601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.687964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.687996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.688348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.688379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.688719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.688753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 
00:32:32.556 [2024-07-25 12:45:05.689111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.689141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.689522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.689566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.689956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.689986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.690324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.690354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.690699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.690732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.691106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.691136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.691542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.691585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.692005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.692042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.692409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.692440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.692818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.692849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 
00:32:32.556 [2024-07-25 12:45:05.693202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.693232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.693634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.693665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.693993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.694024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.694371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.694401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.694717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.694747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.695002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.695034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.695413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.695443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.695775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.695807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.696159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.556 [2024-07-25 12:45:05.696192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.556 qpair failed and we were unable to recover it. 00:32:32.556 [2024-07-25 12:45:05.696448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.696480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 
00:32:32.557 [2024-07-25 12:45:05.696826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.696857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.697205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.697237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.697590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.697621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.697955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.697989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.698327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.698357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.698710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.698742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.699126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.699157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.699490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.699521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.699888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.699919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.700289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.700319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 
00:32:32.557 [2024-07-25 12:45:05.700613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.700643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.700977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.701006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.701346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.701376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.701757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.701787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.702149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.702186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.702522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.702571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.702933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.702963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.703307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.703339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.703687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.703719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.704087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.704115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 
00:32:32.557 [2024-07-25 12:45:05.704313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.704343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.704712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.704744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.705139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.705169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.705512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.705544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.705917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.705947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.706283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.706313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.706700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.706731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.707071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.707101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.707495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.707525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.707891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.707924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 
00:32:32.557 [2024-07-25 12:45:05.708273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.708303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.708592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.708623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.708879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.557 [2024-07-25 12:45:05.708911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.557 qpair failed and we were unable to recover it. 00:32:32.557 [2024-07-25 12:45:05.709272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.709302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.709659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.709690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.710021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.710053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.710384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.710414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.710672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.710703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.711056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.711086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.711324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.711356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 
00:32:32.558 [2024-07-25 12:45:05.711729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.711761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.711987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.712018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.712382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.712414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.712776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.712807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.713156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.713186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.713527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.713568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.713835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.713866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.714205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.714237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.714579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.714610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.714989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.715020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 
00:32:32.558 [2024-07-25 12:45:05.715353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.715384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.715722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.715754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.716110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.716140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.716483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.716514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.716869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.716901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.717251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.717288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.717617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.717649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.718021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.718050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.718406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.718436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.718780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.718812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 
00:32:32.558 [2024-07-25 12:45:05.719155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.719185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.719537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.719584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.719924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.719959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.720341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.720372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.720742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.720773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.721014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.721044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.721405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.558 [2024-07-25 12:45:05.721435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.558 qpair failed and we were unable to recover it. 00:32:32.558 [2024-07-25 12:45:05.721791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.721822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.722162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.722195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.722543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.722599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 
00:32:32.559 [2024-07-25 12:45:05.722954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.722984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.723312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.723342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.723705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.723737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.724099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.724129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.724366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.724394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.724641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.724674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.725039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.725069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.725298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.725327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.725565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.725594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.725973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.726003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 
00:32:32.559 [2024-07-25 12:45:05.726372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.726402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.726835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.726869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.727254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.727291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.727670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.727701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.728045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.728074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.728422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.728453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.728791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.728823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.729060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.729091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.729462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.729493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.729771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.729805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 
00:32:32.559 [2024-07-25 12:45:05.730056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.730085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.730436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.730466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.730816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.730849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.731088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.731122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.731483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.731513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.731889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.731921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.559 qpair failed and we were unable to recover it. 00:32:32.559 [2024-07-25 12:45:05.732317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.559 [2024-07-25 12:45:05.732349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.732720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.732751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.733083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.733114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.733503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.733533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 
00:32:32.560 [2024-07-25 12:45:05.733895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.733926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.734206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.734236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.734592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.734623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.734871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.734901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.735218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.735249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.735480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.735514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.735929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.735961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.736299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.736332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.736546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.736587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.736936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.736969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 
00:32:32.560 [2024-07-25 12:45:05.737335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.737365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.737711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.737742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.738101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.738132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.738473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.738501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.738893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.738927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.739292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.739321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.739667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.739699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.740046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.740077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.740447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.740478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.740819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.740852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 
00:32:32.560 [2024-07-25 12:45:05.741196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.741229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.741463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.741494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.741873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.741905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.742272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.742311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.742668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.742701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.743070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.743100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.743461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.743492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.743768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.743799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.744138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.744170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.560 qpair failed and we were unable to recover it. 00:32:32.560 [2024-07-25 12:45:05.744517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.560 [2024-07-25 12:45:05.744560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 
00:32:32.561 [2024-07-25 12:45:05.744921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.744954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.745279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.745311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.745535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.745582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.745864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.745897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.746285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.746317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.746709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.746740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.747081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.747112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.747466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.747498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.747865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.747898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.748281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.748313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 
00:32:32.561 [2024-07-25 12:45:05.748657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.748689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.749071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.749102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.749422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.749452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.749798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.749829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.750173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.750204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.750561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.750593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.750999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.751029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.751412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.751444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.751797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.751829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.752060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.752088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 
00:32:32.561 [2024-07-25 12:45:05.752415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.752451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.752798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.752832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.753184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.753214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.753444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.753473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.753824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.753856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.754199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.754230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.754585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.754615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.754968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.755000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.755367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.755398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.755801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.755833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 
00:32:32.561 [2024-07-25 12:45:05.756219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.756250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.756482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.756514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.756892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.756926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.561 [2024-07-25 12:45:05.757285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.561 [2024-07-25 12:45:05.757316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.561 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.757672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.757707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.757828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.757858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.758228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.758260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.758512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.758541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.758788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.758822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.759191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.759220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 
00:32:32.562 [2024-07-25 12:45:05.759580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.759612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.759975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.760007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.760367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.760399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.760623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.760653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.761020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.761051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.761420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.761451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.761799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.761833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.762166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.762198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.762584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.762617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.762850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.762880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 
00:32:32.562 [2024-07-25 12:45:05.763251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.763281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.763624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.763655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.763898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.763928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.764288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.764320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.764659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.764690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.765029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.765061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.765445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.765476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.765718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.765748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.765972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.766003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.766339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.766368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 
00:32:32.562 [2024-07-25 12:45:05.766584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.766616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.766982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.767018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.767387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.767418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.767768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.767800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.768163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.768194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.768537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.768578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.769014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.769044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.562 [2024-07-25 12:45:05.769324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.562 [2024-07-25 12:45:05.769353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.562 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.769712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.769745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.769976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.770004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 
00:32:32.563 [2024-07-25 12:45:05.770350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.770380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.770727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.770759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.771110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.771142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.771485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.771517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.771872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.771903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.772293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.772323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.772617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.772647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.773003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.773034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.773376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.773407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.773832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.773863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 
00:32:32.563 [2024-07-25 12:45:05.774215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.774246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.774588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.774618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.774985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.775015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.775390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.775420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.775825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.775856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.776197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.776228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.776572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.776604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.776972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.777003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.777354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.777385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.777677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.777708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 
00:32:32.563 [2024-07-25 12:45:05.778065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.778096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.778444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.778475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.778838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.778871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.779237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.779268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.779604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.779636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.779991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.780022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.780376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.780406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.780764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.780796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.781164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.781195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.781539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.781603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 
00:32:32.563 [2024-07-25 12:45:05.781970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.782002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.782343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.782374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.563 qpair failed and we were unable to recover it. 00:32:32.563 [2024-07-25 12:45:05.782762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.563 [2024-07-25 12:45:05.782795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.783135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.783168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.783534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.783579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.783950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.783980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.784331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.784362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.784707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.784738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.785088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.785119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.785452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.785481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 
00:32:32.564 [2024-07-25 12:45:05.785837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.785870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.786206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.786238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.786590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.786622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.786968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.787001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.787335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.787366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.787761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.787792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.788055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.788088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.788459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.788489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.788822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.788855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.789198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.789228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 
00:32:32.564 [2024-07-25 12:45:05.789576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.789606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.789980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.790011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.790217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.790250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.790602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.790635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.791013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.791045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.791416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.791446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.791831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.791862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.792240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.792272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.792633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.792665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.793034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.793072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 
00:32:32.564 [2024-07-25 12:45:05.793449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.793481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.793818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.793850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.794236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.794268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.794601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.794637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.795011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.795042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.795384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.795416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.795813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.795845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.564 [2024-07-25 12:45:05.796187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.564 [2024-07-25 12:45:05.796219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.564 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.796570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.796602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.796962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.796992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 
00:32:32.565 [2024-07-25 12:45:05.797340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.797373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.797709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.797741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.798083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.798115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.798481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.798515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.798873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.798905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.799154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.799183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.799522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.799566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.799936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.799965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.800316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.800347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.800660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.800690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 
00:32:32.565 [2024-07-25 12:45:05.801046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.801077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.801419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.801449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.801816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.801847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.802174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.802204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.802561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.802595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.802966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.802998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.803356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.803387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.803753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.803786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.804137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.804169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.804516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.804559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 
00:32:32.565 [2024-07-25 12:45:05.804942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.804974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.805340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.805370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.805718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.805751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.806093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.806123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.806493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.806525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.806799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.806833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.807174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.807205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.565 [2024-07-25 12:45:05.807485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.565 [2024-07-25 12:45:05.807517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.565 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.807771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.807805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.808169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.808200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 
00:32:32.566 [2024-07-25 12:45:05.808610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.808642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.809001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.809032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.809343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.809373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.809709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.809742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.810093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.810124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.810487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.810517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.810906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.810937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.811273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.811305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.811647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.811680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.812046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.812076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 
00:32:32.566 [2024-07-25 12:45:05.812428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.812458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.812796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.812827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.813164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.813194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.813541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.813587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.813983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.814013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.814347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.814378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.814774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.814804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.815028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.815056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.815419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.815448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.815793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.815823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 
00:32:32.566 [2024-07-25 12:45:05.816162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.816192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.816533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.816573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.816967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.816998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.817350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.817382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.817731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.817762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.818141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.818171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.818488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.818520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.818795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.818833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.819205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.819237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.819579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.819612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 
00:32:32.566 [2024-07-25 12:45:05.820007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.820038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.820375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.820406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.820767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.820796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.566 [2024-07-25 12:45:05.821115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.566 [2024-07-25 12:45:05.821147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.566 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.821368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.821403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.821792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.821822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.822164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.822194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.822609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.822641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.822985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.823015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.823356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.823386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 
00:32:32.567 [2024-07-25 12:45:05.823726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.823759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.824124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.824156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.824487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.824517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.824882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.824914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.825249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.825280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.825535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.825579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.825951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.825981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.826322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.826353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.826698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.826730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.827097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.827127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 
00:32:32.567 [2024-07-25 12:45:05.827477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.827508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.827873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.827906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.828245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.828275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.828612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.828644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.829033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.829063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.829403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.829434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.829796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.829828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.830167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.830196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.830541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.830584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.830939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.830969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 
00:32:32.567 [2024-07-25 12:45:05.831314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.831345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.831652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.831684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.832067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.832098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.832420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.832452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.832807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.832838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.833057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.833088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.567 [2024-07-25 12:45:05.833447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.567 [2024-07-25 12:45:05.833478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.567 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.833813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.833845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.834193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.834232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.834586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.834619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 
00:32:32.568 [2024-07-25 12:45:05.835014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.835043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.835383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.835414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.835741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.835774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.836151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.836185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.836580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.836612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.837002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.837034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.837269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.837302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.837746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.837778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.838124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.838156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.838501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.838532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 
00:32:32.568 [2024-07-25 12:45:05.838907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.838939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.839280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.839310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.839676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.839708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.839979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.840010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.840391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.840422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.840771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.840804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.841145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.841176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.841543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.841586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.841926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.841956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.842298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.842329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 
00:32:32.568 [2024-07-25 12:45:05.842692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.842725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.843068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.843098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.843437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.843469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.843816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.843847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.844193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.844224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.844573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.844612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.845006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.845039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.845405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.845437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.845780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.845813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.846180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.846214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 
00:32:32.568 [2024-07-25 12:45:05.846569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.846602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.847010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.568 [2024-07-25 12:45:05.847041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.568 qpair failed and we were unable to recover it. 00:32:32.568 [2024-07-25 12:45:05.847362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.847393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.847731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.847762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.848018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.848050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.848391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.848421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.848651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.848683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.849032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.849061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.849409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.849442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.849784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.849818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 
00:32:32.569 [2024-07-25 12:45:05.850161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.850193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.850529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.850573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.850940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.850972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.851303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.851336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.851724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.851758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.852106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.852140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.852450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.852484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.852848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.852882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.853225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.853259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.853612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.853646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 
00:32:32.569 [2024-07-25 12:45:05.854051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.854082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.854461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.854494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.854856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.854889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.855235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.855267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.855644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.855676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.856040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.856070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.856422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.856453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.856803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.856834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.857242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.857272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.857627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.857660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 
00:32:32.569 [2024-07-25 12:45:05.857886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.857919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.858265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.858297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.858633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.858666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.859034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.859066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.859413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.859444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.859816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.859848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.569 [2024-07-25 12:45:05.860225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.569 [2024-07-25 12:45:05.860269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.569 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.860616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.860646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.861006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.861039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.861358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.861389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 
00:32:32.570 [2024-07-25 12:45:05.861725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.861759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.862100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.862132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.862475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.862507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.862878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.862909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.865029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.865095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.865368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.865401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.865792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.865826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.866175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.866207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.866621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.866655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.867037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.867070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 
00:32:32.570 [2024-07-25 12:45:05.867468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.867501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.867882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.867915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.868284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.868314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.868659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.868690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.868932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.868966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.869315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.869348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.869684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.869717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.870059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.870089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.870403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.870434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.870719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.870750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 
00:32:32.570 [2024-07-25 12:45:05.871116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.871149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.871520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.871579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.871963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.871994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.872366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.872403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.570 [2024-07-25 12:45:05.872766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.570 [2024-07-25 12:45:05.872797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.570 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.873146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.873178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.873529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.873576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.873838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.873869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.874243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.874274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.874621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.874653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 
00:32:32.571 [2024-07-25 12:45:05.875003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.875035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.875303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.875343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.875706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.875738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.876029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.876060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.876384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.876414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.876778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.876809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.877058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.877088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.877484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.877515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.877809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.877843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.878210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.878241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 
00:32:32.571 [2024-07-25 12:45:05.878582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.878614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.878964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.878995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.879356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.879387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.879596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.879628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.879970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.880001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.880330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.880362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.880684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.880717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.880940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.880969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.881317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.881346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.881698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.881741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 
00:32:32.571 [2024-07-25 12:45:05.882107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.882136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.882511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.882543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.882936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.882967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.884200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.884252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.884655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.884688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.885078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.885110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.885492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.885522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.885920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.885951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.886323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.886355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.886718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.886749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 
00:32:32.571 [2024-07-25 12:45:05.887100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.887134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.571 qpair failed and we were unable to recover it. 00:32:32.571 [2024-07-25 12:45:05.887520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.571 [2024-07-25 12:45:05.887561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.887940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.887972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.888352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.888384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.888640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.888679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.889053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.889085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.889434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.889465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.889723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.889754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.890130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.890160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.890510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.890541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 
00:32:32.572 [2024-07-25 12:45:05.890933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.890964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.891350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.891382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.891730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.891761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.892110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.892141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.892504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.892535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.892810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.892845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.893199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.893228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.893568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.893601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.893997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.894028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.894359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.894391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 
00:32:32.572 [2024-07-25 12:45:05.894669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.894701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.895087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.895117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.895484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.895515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.895881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.895913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.896261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.896292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.896528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.896572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.896968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.896999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.897346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.897376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.897722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.897753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.898035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.898065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 
00:32:32.572 [2024-07-25 12:45:05.898461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.898494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.898870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.898925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.899282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.899315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.899722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.899755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.900116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.900146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.572 [2024-07-25 12:45:05.900542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.572 [2024-07-25 12:45:05.900585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.572 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.900947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.900977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.901320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.901351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.901639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.901671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.902056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.902086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 
00:32:32.573 [2024-07-25 12:45:05.902451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.902482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.902744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.902778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.903054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.903083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.903435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.903467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.903799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.903831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.904242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.904274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.904630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.904660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.905014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.905045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.905305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.905336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.905698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.905729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 
00:32:32.573 [2024-07-25 12:45:05.906149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.906179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.906497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.906528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.906873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.906906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.907235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.907267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.907619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.907651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.907935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.907966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.908278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.908308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.908670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.908701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.909027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.909058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.909342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.909374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 
00:32:32.573 [2024-07-25 12:45:05.909598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.909634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.909994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.910025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.910288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.910318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.910607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.910639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.911016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.911047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.911291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.911323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.911607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.911638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.912067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.912097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.912438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.912470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.912838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.912869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 
00:32:32.573 [2024-07-25 12:45:05.913230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.913261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.913644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.573 [2024-07-25 12:45:05.913675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.573 qpair failed and we were unable to recover it. 00:32:32.573 [2024-07-25 12:45:05.914024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.914061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.914292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.914323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.914674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.914704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.915066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.915097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.915483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.915514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.915973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.916005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.916242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.916272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.916621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.916653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 
00:32:32.574 [2024-07-25 12:45:05.917009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.917039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.917393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.917424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.917781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.917812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.918158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.918189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.918476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.918505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.918854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.918887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.919239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.919272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.919633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.919687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.920147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.920178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.920527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.920572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 
00:32:32.574 [2024-07-25 12:45:05.920972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.921004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.921409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.921441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.921823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.921855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.922227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.922258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.922601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.922635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.922988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.923019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.923362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.923393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.923623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.923659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.924034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.924065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.924333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.924365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 
00:32:32.574 [2024-07-25 12:45:05.924645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.924677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.925029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.925060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.925380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.925411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.925783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.925816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.926162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.926193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.926542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.926586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.926945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.926975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.574 qpair failed and we were unable to recover it. 00:32:32.574 [2024-07-25 12:45:05.927327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.574 [2024-07-25 12:45:05.927358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.927703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.927742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.928111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.928142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 
00:32:32.575 [2024-07-25 12:45:05.928517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.928560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.928927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.928958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.929301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.929333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.929622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.929657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.929937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.929969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.930414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.930447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.930848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.930883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.931195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.931227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.931529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.931573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.931931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.931962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 
00:32:32.575 [2024-07-25 12:45:05.932195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.932226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.932451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.932483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.932826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.932857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.933294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.933324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.933712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.933744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.934013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.934045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.934428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.934459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.934831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.934864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.935180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.935211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.935581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.935614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 
00:32:32.575 [2024-07-25 12:45:05.935872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.935906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.936292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.936321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.936699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.936733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.937093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.937124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.937480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.937511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.937899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.575 [2024-07-25 12:45:05.937932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.575 qpair failed and we were unable to recover it. 00:32:32.575 [2024-07-25 12:45:05.938221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.938252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.938595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.938627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.938996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.939026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.939367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.939398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 
00:32:32.576 [2024-07-25 12:45:05.939725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.939764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.940151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.940182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.940408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.940443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.940800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.940833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.941193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.941224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.941605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.941636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.941985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.942017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.942349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.942381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.942743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.942777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.943123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.943154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 
00:32:32.576 [2024-07-25 12:45:05.943506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.943537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.943828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.943861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.944197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.944228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.944453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.944486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.944870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.944903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.945273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.945304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.945666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.945699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.946046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.946077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.946436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.946466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.946876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.946909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 
00:32:32.576 [2024-07-25 12:45:05.947256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.947289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.947660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.947692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.948051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.948081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.948419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.948450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.948619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.948650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.948997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.949028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.949340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.949371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.949702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.949733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.950072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.950103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 00:32:32.576 [2024-07-25 12:45:05.950449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.576 [2024-07-25 12:45:05.950479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.576 qpair failed and we were unable to recover it. 
00:32:32.576 [2024-07-25 12:45:05.950826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.950859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.951189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.951221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.951444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.951477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.951725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.951759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.952045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.952076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.952411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.952442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.952777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.952809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.953188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.953219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.953601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.953634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.954016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.954046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 
00:32:32.577 [2024-07-25 12:45:05.954218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.954251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.954484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.954522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.954939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.954971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.955343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.955374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.955613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.955645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.956000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.956031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.956351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.956382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.956717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.956749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.957095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.957126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.957371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.957405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 
00:32:32.577 [2024-07-25 12:45:05.957781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.957813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.958195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.958229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.958600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.958633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.958850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.958884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.959155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.959187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.959442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.959474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.959736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.959769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.960093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.960124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.960522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.960578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.960836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.960867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 
00:32:32.577 [2024-07-25 12:45:05.961225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.961258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.961609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.961641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.961902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.961933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.962293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.962324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.962623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.962655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.577 [2024-07-25 12:45:05.963008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.577 [2024-07-25 12:45:05.963039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.577 qpair failed and we were unable to recover it. 00:32:32.850 [2024-07-25 12:45:05.963392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.850 [2024-07-25 12:45:05.963426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.850 qpair failed and we were unable to recover it. 00:32:32.850 [2024-07-25 12:45:05.963842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.850 [2024-07-25 12:45:05.963875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.964283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.964322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.964671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.964703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 
00:32:32.851 [2024-07-25 12:45:05.965058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.965089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.965324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.965355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.965516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.965558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.965975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.966005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.966359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.966391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.966738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.966770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.967085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.967117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.967375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.967406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.967742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.967774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.967965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.967996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 
00:32:32.851 [2024-07-25 12:45:05.968253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.968284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.968572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.968604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.969037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.969069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.969440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.969472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.969704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.969736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.970101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.970133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.970521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.970561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.970907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.970938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.971206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.971241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.971626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.971658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 
00:32:32.851 [2024-07-25 12:45:05.972024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.972056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.972279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.972309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.972593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.972625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.973041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.973072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.973363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.973395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.973654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.973686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.974064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.974095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.974257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.974291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.974680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.974712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.975061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.975093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 
00:32:32.851 [2024-07-25 12:45:05.975372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.975403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.975696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.975728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.976017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.976048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.851 [2024-07-25 12:45:05.976383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.851 [2024-07-25 12:45:05.976414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.851 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.976785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.976818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.977156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.977188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.977559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.977592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.977984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.978015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.978248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.978283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.978527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.978576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 
00:32:32.852 [2024-07-25 12:45:05.978836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.978868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.979220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.979252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.979611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.979644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.980007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.980038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.980421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.980453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.980709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.980741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.981100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.981131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.981487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.981517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.982239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.982274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.982628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.982664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 
00:32:32.852 [2024-07-25 12:45:05.982940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.982971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.983252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.983283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.983632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.983665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.984031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.984062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.984336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.984367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.984608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.984641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.985013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.985045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.985281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.985313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.985643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.985675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.986940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.986994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 
00:32:32.852 [2024-07-25 12:45:05.987411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.987446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.987613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.987647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.988015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.988046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.988394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.988426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.988648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.988681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.989044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.989076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.989470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.989510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.989924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.989957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.990333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.990364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 00:32:32.852 [2024-07-25 12:45:05.990707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.852 [2024-07-25 12:45:05.990739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.852 qpair failed and we were unable to recover it. 
00:32:32.852 [2024-07-25 12:45:05.991075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.991106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.991454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.991486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.991787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.991820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.992042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.992073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.992451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.992482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.992814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.992847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.993230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.993262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.993616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.993648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.993894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.993925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.994275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.994307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 
00:32:32.853 [2024-07-25 12:45:05.994593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.994626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.994978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.995010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.995351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.995381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.995636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.995669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.996031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.996063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.996405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.996437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.996686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.996718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.997071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.997102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.997431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.997463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.997695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.997727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 
00:32:32.853 [2024-07-25 12:45:05.998078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.998110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.998475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.998506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.998891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.998922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.999269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.999301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:05.999669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:05.999703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.000053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.000084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.000412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.000444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.000696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.000728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.001101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.001132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.001458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.001488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 
00:32:32.853 [2024-07-25 12:45:06.001735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.001768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.002105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.002136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.002375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.002407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.002659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.002690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.003085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.003116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.003457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.003488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.853 [2024-07-25 12:45:06.003833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.853 [2024-07-25 12:45:06.003866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.853 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.004230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.004266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.004625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.004660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.005034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.005064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 
00:32:32.854 [2024-07-25 12:45:06.005447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.005479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.005730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.005763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.006082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.006113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.006459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.006490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.006902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.006934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.007280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.007311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.007674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.007706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.008071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.008101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.008480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.008512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.008911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.008944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 
00:32:32.854 [2024-07-25 12:45:06.009336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.009368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.009593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.009626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.009889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.009921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.010147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.010178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.010499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.010530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.010956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.010989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.011229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.011260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.011583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.011615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.012010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.012041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.012379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.012410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 
00:32:32.854 [2024-07-25 12:45:06.012751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.012783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.013167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.013199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.013596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.013630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.013880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.013913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.014290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.014328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.014671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.014704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.015096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.015128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.015500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.015532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.015796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.015827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.016077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.016108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 
00:32:32.854 [2024-07-25 12:45:06.016455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.016487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.016855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.016887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.017247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.854 [2024-07-25 12:45:06.017278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.854 qpair failed and we were unable to recover it. 00:32:32.854 [2024-07-25 12:45:06.017509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.855 [2024-07-25 12:45:06.017541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.855 qpair failed and we were unable to recover it. 00:32:32.855 [2024-07-25 12:45:06.017948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.855 [2024-07-25 12:45:06.017979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.855 qpair failed and we were unable to recover it. 00:32:32.855 [2024-07-25 12:45:06.018249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.855 [2024-07-25 12:45:06.018286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.855 qpair failed and we were unable to recover it. 00:32:32.855 [2024-07-25 12:45:06.018666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.855 [2024-07-25 12:45:06.018698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.855 qpair failed and we were unable to recover it. 00:32:32.855 [2024-07-25 12:45:06.019085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.855 [2024-07-25 12:45:06.019116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.855 qpair failed and we were unable to recover it. 00:32:32.855 [2024-07-25 12:45:06.019508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.855 [2024-07-25 12:45:06.019540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.855 qpair failed and we were unable to recover it. 00:32:32.855 [2024-07-25 12:45:06.019913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.855 [2024-07-25 12:45:06.019945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.855 qpair failed and we were unable to recover it. 
00:32:32.861 [2024-07-25 12:45:06.102995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.103027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.103369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.103401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.103754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.103793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.104144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.104175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.104614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.104648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.105031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.105064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.105448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.105480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.105859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.105891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.106236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.106267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.106611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.106643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 
00:32:32.861 [2024-07-25 12:45:06.107026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.107056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.107386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.107416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.107771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.107804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.108135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.108166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.108479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.108511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.108892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.108925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.109301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.109332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.109702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.109735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.110081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.110112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.110516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.110558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 
00:32:32.861 [2024-07-25 12:45:06.110989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.111020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.111423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.111455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.111735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.111767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.112107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.112139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.112583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.112616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.112936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.112969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.113330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.113362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.113771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.113803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.114170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.114201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.114568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.114608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 
00:32:32.861 [2024-07-25 12:45:06.115006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.115037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.115403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.115433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.115786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.115819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.116200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.116231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.116575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.116608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.116975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.117008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.861 [2024-07-25 12:45:06.117363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.861 [2024-07-25 12:45:06.117395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.861 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.117618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.117651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.118033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.118064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.118399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.118431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 
00:32:32.862 [2024-07-25 12:45:06.118657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.118689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.119016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.119047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.119369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.119400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.119759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.119791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.120169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.120200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.120538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.120587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.120981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.121013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.121384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.121414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.121693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.121725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.122108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.122139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 
00:32:32.862 [2024-07-25 12:45:06.122377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.122410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.122803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.122834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.123198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.123229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.123593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.123625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.124000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.124031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.124372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.124402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.124727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.124759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.125100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.125131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.125484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.125517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.125856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.125889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 
00:32:32.862 [2024-07-25 12:45:06.126230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.126262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.126599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.126632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.127023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.127054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.127390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.127421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.127822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.127855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.130293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.130360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.130669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.130710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.131069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.131102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.131471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.131502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.131863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.131895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 
00:32:32.862 [2024-07-25 12:45:06.132247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.132288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.132648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.132681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.133037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.862 [2024-07-25 12:45:06.133068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.862 qpair failed and we were unable to recover it. 00:32:32.862 [2024-07-25 12:45:06.133454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.133484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.133847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.133878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.134225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.134256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.134590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.134622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.134990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.135022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.135372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.135402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.135726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.135766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 
00:32:32.863 [2024-07-25 12:45:06.136118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.136151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.136531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.136591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.136943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.136976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.137346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.137378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.137738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.137772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.138138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.138169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.138524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.138570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.138819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.138851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.139221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.139253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.139582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.139615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 
00:32:32.863 [2024-07-25 12:45:06.139967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.139998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.140336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.140367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.140780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.140812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.141152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.141182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.141503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.141534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.141930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.141965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.142334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.142364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.142704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.142744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.142990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.143021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.143362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.143393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 
00:32:32.863 [2024-07-25 12:45:06.143727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.143760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.144129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.144162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.144499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.144530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.144918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.144950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.145292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.145323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.145674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.145705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.146073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.146106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.146443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.146474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.146754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.146786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 00:32:32.863 [2024-07-25 12:45:06.147126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.863 [2024-07-25 12:45:06.147158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.863 qpair failed and we were unable to recover it. 
00:32:32.864 [2024-07-25 12:45:06.147504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.147536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.147905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.147937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.148306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.148337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.148721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.148756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.149091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.149124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.149494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.149524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.149799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.149834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.150205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.150236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.150570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.150602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.150965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.150996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 
00:32:32.864 [2024-07-25 12:45:06.151338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.151371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.151702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.151733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.152069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.152100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.152467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.152498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.152858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.152891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.153261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.153293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.153770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.153817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.154190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.154228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.154591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.154626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.154974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.155005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 
00:32:32.864 [2024-07-25 12:45:06.155356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.155388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.155726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.155758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.156133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.156164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.156503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.156534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.156927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.156960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.157190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.157224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.157584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.157616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.157989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.158020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.158338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.158377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 00:32:32.864 [2024-07-25 12:45:06.158743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.864 [2024-07-25 12:45:06.158776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.864 qpair failed and we were unable to recover it. 
00:32:32.870 [2024-07-25 12:45:06.234606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.234638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.234907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.234938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.235343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.235381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.235726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.235759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.236099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.236132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.236529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.236576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.236926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.236957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.237296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.237327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.237748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.237780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.237916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.237951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 
00:32:32.870 [2024-07-25 12:45:06.238332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.238364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.238675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.238707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.239055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.239086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.870 [2024-07-25 12:45:06.239432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.870 [2024-07-25 12:45:06.239462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.870 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.239852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.239887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.240225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.240257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.240605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.240637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.241015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.241046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.241274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.241308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.241663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.241694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 
00:32:32.871 [2024-07-25 12:45:06.242082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.242114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.242505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.242538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.242784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.242816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.243182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.243213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.243566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.243598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.243985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.244016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.244342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.244376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.244757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.244790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.245135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.245167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.245515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.245561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 
00:32:32.871 [2024-07-25 12:45:06.245936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.245968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.246265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.246296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.246523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.246586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.246835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.246866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.247218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.247249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.247663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.247696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.248085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.248116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.248506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.248537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.248939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.248971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.249353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.249385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 
00:32:32.871 [2024-07-25 12:45:06.249625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.249659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.250019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.250051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.250418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.250449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.250804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.250843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.251188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.251220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.251441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.251473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.251826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.251859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.252193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.252223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.252568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.252601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 00:32:32.871 [2024-07-25 12:45:06.252840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.252871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.871 qpair failed and we were unable to recover it. 
00:32:32.871 [2024-07-25 12:45:06.253206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.871 [2024-07-25 12:45:06.253238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.253593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.253626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.256088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.256159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.256507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.256544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.256954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.256988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.257213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.257245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.257594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.257627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.257999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.258030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.258384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.258415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 00:32:32.872 [2024-07-25 12:45:06.258740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.872 [2024-07-25 12:45:06.258773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:32.872 qpair failed and we were unable to recover it. 
00:32:32.872 [2024-07-25 12:45:06.259116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.259148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.260990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.261051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.261502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.261538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.261931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.261965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.262305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.262335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.262687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.262719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.263090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.263122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.263470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.263502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.263881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.263915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.264156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.264187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 
00:32:33.145 [2024-07-25 12:45:06.264508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.264562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.264920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.264951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.265199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.265233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.265609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.265641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.265893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.265927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.266297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.266329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.266670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.266703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.266952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.145 [2024-07-25 12:45:06.266983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.145 qpair failed and we were unable to recover it. 00:32:33.145 [2024-07-25 12:45:06.267338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.267369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.267709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.267743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 
00:32:33.146 [2024-07-25 12:45:06.268100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.268132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.268387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.268422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.268777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.268809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.269142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.269174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.269541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.269586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.269953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.269985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.270334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.270365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.270749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.270781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.271127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.271158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.271496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.271526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 
00:32:33.146 [2024-07-25 12:45:06.271945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.271979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.272325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.272355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.272687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.272720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.273078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.273110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.273451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.273482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.273728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.273761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.274117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.274149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.274492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.274524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.274898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.274930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.275275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.275306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 
00:32:33.146 [2024-07-25 12:45:06.275674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.275707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.276142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.276174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.276444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.276475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.276856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.276887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.277265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.277297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.277654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.277685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.278010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.278041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.278401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.278431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.278691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.278725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.279099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.279130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 
00:32:33.146 [2024-07-25 12:45:06.279483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.279515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.279881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.279919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.280277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.280308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.280535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.280584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.146 [2024-07-25 12:45:06.280966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.146 [2024-07-25 12:45:06.280997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.146 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.281329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.281362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.281707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.281740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.282115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.282146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.282497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.282529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.282815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.282847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 
00:32:33.147 [2024-07-25 12:45:06.283192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.283223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.283588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.283620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.284045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.284078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.284441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.284472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.284821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.284853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.285223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.285254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.285584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.285617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.285885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.285916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.286254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.286286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.286517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.286563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 
00:32:33.147 [2024-07-25 12:45:06.286956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.286988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.287325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.287356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.287697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.287729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.288098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.288128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.288510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.288541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.290528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.290603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.290889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.290926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.291291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.291323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.291711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.291752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.292129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.292159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 
00:32:33.147 [2024-07-25 12:45:06.292504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.292536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.292917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.292949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.293303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.293334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.293568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.293602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.293960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.293990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.294334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.294365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.294735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.294768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.295115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.295147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.295536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.295584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.295976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.296009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 
00:32:33.147 [2024-07-25 12:45:06.296353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.147 [2024-07-25 12:45:06.296384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.147 qpair failed and we were unable to recover it. 00:32:33.147 [2024-07-25 12:45:06.296720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.296753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.297105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.297136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.297377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.297412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.297784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.297817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.298186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.298217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.298586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.298619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.298986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.299020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.299368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.299400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.299769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.299802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 
00:32:33.148 [2024-07-25 12:45:06.300193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.300226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.300587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.300621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.302866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.302934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.303361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.303397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.303768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.303802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.304174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.304207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.304572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.304604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.304841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.304872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.305233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.305264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.305614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.305646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 
00:32:33.148 [2024-07-25 12:45:06.306042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.306073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.306312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.306343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.306684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.306717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.307063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.307094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.307322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.307356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.307699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.307732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.308066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.308098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.308444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.308475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.308760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.308795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.309163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.309200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 
00:32:33.148 [2024-07-25 12:45:06.309561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.309594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.310023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.310054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.310294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.310340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.310714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.310747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.311029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.311060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.311402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.311434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.311788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.311820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.148 [2024-07-25 12:45:06.312200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.148 [2024-07-25 12:45:06.312231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.148 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.312583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.312614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.312984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.313014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 
00:32:33.149 [2024-07-25 12:45:06.313408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.313439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.313856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.313888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.314228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.314260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.314626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.314658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.315046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.315080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.315422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.315453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.315803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.315837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.316182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.316212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.316585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.316617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.316994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.317026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 
00:32:33.149 [2024-07-25 12:45:06.317370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.317401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.317752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.317785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.317953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.317986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.318348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.318379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.318622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.318654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.319031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.319063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.319417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.319447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.319822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.319854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.320224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.320257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.320597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.320629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 
00:32:33.149 [2024-07-25 12:45:06.321006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.321038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.321273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.321303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.321584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.321615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.321971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.322003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.322391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.322421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.322785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.322815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.323164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.323195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.323537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.323586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.323997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.324030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.324375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.324407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 
00:32:33.149 [2024-07-25 12:45:06.324775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.324812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.325155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.325189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.325435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.325467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.325679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.325711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.149 [2024-07-25 12:45:06.326072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.149 [2024-07-25 12:45:06.326104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.149 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.326465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.326497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.326774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.326805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.327038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.327070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.327422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.327452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.327673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.327704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 
00:32:33.150 [2024-07-25 12:45:06.328080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.328112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.328497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.328534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.328889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.328921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.329266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.329298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.329626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.329660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.329989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.330019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.330365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.330398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.330730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.330762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.331043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.331074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.331486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.331517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 
00:32:33.150 [2024-07-25 12:45:06.331891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.331922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.332312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.332343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.332679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.332712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.333094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.333125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.333466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.333497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.333792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.333825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.334180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.334211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.334598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.334637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.335009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.335041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.335407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.335438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 
00:32:33.150 [2024-07-25 12:45:06.335790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.335822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.336194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.336225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.336520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.336563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.150 [2024-07-25 12:45:06.336855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.150 [2024-07-25 12:45:06.336887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.150 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.337213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.337243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.337614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.337646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.337991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.338021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.338406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.338437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.338688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.338720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.339080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.339111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 
00:32:33.151 [2024-07-25 12:45:06.339452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.339483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.339702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.339733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.340070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.340102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.340473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.340504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.340833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.340865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.341243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.341274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.341609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.341641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.342032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.342062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.342436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.342468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.342806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.342838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 
00:32:33.151 [2024-07-25 12:45:06.343226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.343257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.343623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.343661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.344040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.344072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.344441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.344472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.344832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.344863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.345109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.345140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.345482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.345513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.345902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.345933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.346255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.346286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.346660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.346692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 
00:32:33.151 [2024-07-25 12:45:06.347040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.347070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.347414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.347444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.347783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.347815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.348219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.348250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.348626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.348658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.349011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.349042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.349407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.349439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.349833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.349865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.350200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.350237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 00:32:33.151 [2024-07-25 12:45:06.350499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.151 [2024-07-25 12:45:06.350529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.151 qpair failed and we were unable to recover it. 
00:32:33.151 [2024-07-25 12:45:06.350891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.350925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.351276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.351307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.351674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.351707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.352052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.352084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.352398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.352429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.352780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.352811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.353157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.353188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.353521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.353574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.353913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.353945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.354278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.354309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 
00:32:33.152 [2024-07-25 12:45:06.354670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.354702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.355039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.355071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.355422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.355455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.355693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.355727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.356083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.356113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.356450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.356481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.356809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.356841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.357175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.357206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.357367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.357401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 00:32:33.152 [2024-07-25 12:45:06.357785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.152 [2024-07-25 12:45:06.357816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.152 qpair failed and we were unable to recover it. 
00:32:33.158 [2024-07-25 12:45:06.431597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.431633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.432005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.432037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.432391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.432422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.432636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.432668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.433011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.433042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.433259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.433290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.433633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.433665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.433916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.433947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.434309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.434341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.434481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.434512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 
00:32:33.158 [2024-07-25 12:45:06.434811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.434844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.435076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.435108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.435387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.435418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.435852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.435884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.436223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.436254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.436592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.436623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.437011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.437042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.437387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.437418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.437821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.437853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.438209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.438239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 
00:32:33.158 [2024-07-25 12:45:06.438567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.438599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.438999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.439030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.439365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.439398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.439738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.439772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.440140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.440172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.440418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.440449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.440647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.440679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.441029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.441059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.441373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.441404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.441776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.441808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 
00:32:33.158 [2024-07-25 12:45:06.442202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.442232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.442600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.158 [2024-07-25 12:45:06.442632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.158 qpair failed and we were unable to recover it. 00:32:33.158 [2024-07-25 12:45:06.442946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.442976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.443322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.443352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.443630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.443665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.443994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.444024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.444360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.444390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.444636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.444668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.445042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.445072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.445391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.445422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 
00:32:33.159 [2024-07-25 12:45:06.445767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.445797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.446143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.446173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.446533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.446575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.446975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.447007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.447396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.447427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.447791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.447824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.448175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.448205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.448624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.448657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.449021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.449052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.449389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.449419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 
00:32:33.159 [2024-07-25 12:45:06.449780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.449823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.450164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.450194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.450422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.450452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.450804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.450835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.451177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.451208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.451603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.451636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.452014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.452045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.452394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.452425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.452656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.452687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.453049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.453080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 
00:32:33.159 [2024-07-25 12:45:06.453439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.453470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.453801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.453833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.454208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.454238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.454492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.454522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.159 [2024-07-25 12:45:06.454835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.159 [2024-07-25 12:45:06.454866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.159 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.455233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.455263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.455536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.455588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.455969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.456001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.456328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.456359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.456609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.456642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 
00:32:33.160 [2024-07-25 12:45:06.457001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.457033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.457376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.457407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.457569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.457604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.457962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.457993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.458345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.458377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.458731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.458762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.459113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.459145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.459373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.459407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.459846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.459878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.460227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.460258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 
00:32:33.160 [2024-07-25 12:45:06.460664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.460696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.460946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.460979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.461153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.461184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.461573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.461605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.462005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.462036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.462383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.462413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.462659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.462690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.463071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.463101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.463453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.463485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.463758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.463790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 
00:32:33.160 [2024-07-25 12:45:06.464119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.464150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.464383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.464417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.464780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.464813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.465164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.465195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.465425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.465456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.465841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.465872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.466203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.466234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.466580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.466611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.466973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.467003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.160 qpair failed and we were unable to recover it. 00:32:33.160 [2024-07-25 12:45:06.467370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.160 [2024-07-25 12:45:06.467402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 
00:32:33.161 [2024-07-25 12:45:06.467652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.467684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.468052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.468083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.468273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.468305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.468611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.468644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.468970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.469000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.469346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.469378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.469667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.469698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.469924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.469958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.470316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.470346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.470702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.470733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 
00:32:33.161 [2024-07-25 12:45:06.471086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.471118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.471480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.471511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.471879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.471912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.472247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.472278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.472629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.472661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.473018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.473049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.473377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.473408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.473781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.473812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.474128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.474164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.474568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.474600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 
00:32:33.161 [2024-07-25 12:45:06.474978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.475008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.475249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.475280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.475522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.475565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.475823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.475855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.476128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.476159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.476499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.476530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.476941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.476974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.477195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.477226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.477613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.477645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 00:32:33.161 [2024-07-25 12:45:06.478008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.161 [2024-07-25 12:45:06.478039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.161 qpair failed and we were unable to recover it. 
00:32:33.161 [2024-07-25 12:45:06.478316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:33.161 [2024-07-25 12:45:06.478346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:33.161 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with only the timestamps changing, roughly 200 more times between 12:45:06.478 and 12:45:06.555 ...]
00:32:33.440 [2024-07-25 12:45:06.555758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:33.440 [2024-07-25 12:45:06.555789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:33.440 qpair failed and we were unable to recover it.
00:32:33.440 [2024-07-25 12:45:06.556137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.556168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.556538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.556585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.556953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.556984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.557323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.557354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.557700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.557732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.558057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.558088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.558426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.558456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.558725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.558759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.559112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.559143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.559488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.559520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 
00:32:33.440 [2024-07-25 12:45:06.559915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.559947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.560304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.560336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.560724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.560756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.561110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.561140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.561506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.561536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.561910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.561941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.562295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.562326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.562673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.440 [2024-07-25 12:45:06.562706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.440 qpair failed and we were unable to recover it. 00:32:33.440 [2024-07-25 12:45:06.563044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.563075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.563420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.563450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 
00:32:33.441 [2024-07-25 12:45:06.563777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.563809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.564163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.564193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.564537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.564579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.564943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.564979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.565329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.565360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.565696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.565728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.566069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.566100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.566450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.566481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.566831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.566864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.567202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.567233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 
00:32:33.441 [2024-07-25 12:45:06.567559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.567591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.567965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.567997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.568319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.568350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.568732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.568764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.569094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.569125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.569459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.569490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.569832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.569864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.570212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.570243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.570611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.570644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.570994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.571025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 
00:32:33.441 [2024-07-25 12:45:06.571368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.571399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.571765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.571796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.572143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.572174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.572514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.572545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.572937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.572968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.573334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.573366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.573742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.573774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.574142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.574173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.574530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.574575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.574943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.574975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 
00:32:33.441 [2024-07-25 12:45:06.575289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.575319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.575708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.575741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.576123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.576154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.576374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.576405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.441 [2024-07-25 12:45:06.576735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.441 [2024-07-25 12:45:06.576766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.441 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.577107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.577138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.577458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.577489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.577856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.577888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.578088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.578120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.578472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.578503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 
00:32:33.442 [2024-07-25 12:45:06.578899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.578931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.579298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.579329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.579667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.579698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.580076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.580107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.580464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.580496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.580883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.580914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.581276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.581307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.581676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.581708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.582025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.582056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.582396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.582427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 
00:32:33.442 [2024-07-25 12:45:06.582798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.582830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.583213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.583244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.583594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.583626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.583986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.584018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.584371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.584401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.584767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.584798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.585152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.585184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.585410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.585445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.585683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.585716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.586090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.586122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 
00:32:33.442 [2024-07-25 12:45:06.586451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.586482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.586826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.586859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.587197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.587228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.587617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.587649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.587998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.588029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.588372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.588402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.588766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.588797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.589128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.589160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.589506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.589536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.589918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.589950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 
00:32:33.442 [2024-07-25 12:45:06.590331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.442 [2024-07-25 12:45:06.590362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.442 qpair failed and we were unable to recover it. 00:32:33.442 [2024-07-25 12:45:06.590740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.590778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.591162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.591193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.591389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.591420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.591757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.591789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.592132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.592163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.592520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.592558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.592896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.592927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.593311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.593342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.593679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.593711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 
00:32:33.443 [2024-07-25 12:45:06.594105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.594136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.594360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.594393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.594760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.594792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.595145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.595174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.595522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.595574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.595959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.595990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.596356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.596387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.596727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.596759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.597126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.597157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.597495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.597526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 
00:32:33.443 [2024-07-25 12:45:06.597870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.597902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.598260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.598292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.598643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.598676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.599039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.599070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.599407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.599438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.599794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.599825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.600145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.600176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.600519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.600562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.600941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.600972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.601365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.601397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 
00:32:33.443 [2024-07-25 12:45:06.601786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.601819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.602186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.602217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.602571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.602603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.603002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.603033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.603397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.603428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.603771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.603803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.604213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.604244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.443 [2024-07-25 12:45:06.604617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.443 [2024-07-25 12:45:06.604649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.443 qpair failed and we were unable to recover it. 00:32:33.444 [2024-07-25 12:45:06.605032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.444 [2024-07-25 12:45:06.605064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.444 qpair failed and we were unable to recover it. 00:32:33.444 [2024-07-25 12:45:06.605367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.444 [2024-07-25 12:45:06.605398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.444 qpair failed and we were unable to recover it. 
00:32:33.444 [2024-07-25 12:45:06.605727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:33.444 [2024-07-25 12:45:06.605759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:33.444 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt from 12:45:06.605727 through 12:45:06.685018 ...]
00:32:33.450 [2024-07-25 12:45:06.684987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:33.450 [2024-07-25 12:45:06.685018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420
00:32:33.450 qpair failed and we were unable to recover it.
00:32:33.450 [2024-07-25 12:45:06.685419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.685451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.685790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.685817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.686241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.686266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.686617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.686646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.687027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.687053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.687403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.687430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.687787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.687815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.688284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.688316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.688675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.688707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.688987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.689017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 
00:32:33.450 [2024-07-25 12:45:06.689384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.689416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.689736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.689768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.690125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.690156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.690495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.690525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.690882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.690914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.691272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.691303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.691655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.691688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.692038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.692070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.692404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.692435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.692790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.692821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 
00:32:33.450 [2024-07-25 12:45:06.693060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.693091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.693433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.693463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.693784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.693816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.694177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.694208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.694574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.694606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.694995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.695026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.695365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.695396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.695730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.695761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.696112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.696143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.696495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.696526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 
00:32:33.450 [2024-07-25 12:45:06.696831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.696862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.697207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.697239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.697578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.450 [2024-07-25 12:45:06.697610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.450 qpair failed and we were unable to recover it. 00:32:33.450 [2024-07-25 12:45:06.697963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.697994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.698338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.698370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.698760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.698792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.699027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.699058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.699436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.699467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.699798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.699831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.700170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.700201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 
00:32:33.451 [2024-07-25 12:45:06.700423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.700453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.700873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.700906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.701184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.701215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.701440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.701472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.701832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.701864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.702209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.702241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.702580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.702613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.703004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.703035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.703351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.703381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.703727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.703760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 
00:32:33.451 [2024-07-25 12:45:06.704091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.704123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.704452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.704485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.704877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.704908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.705144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.705176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.705422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.705453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.705805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.705837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.706196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.706228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.706568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.706599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.706930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.706962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.707351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.707381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 
00:32:33.451 [2024-07-25 12:45:06.707745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.707778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.708153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.708185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.708438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.708470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.708742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.708775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.708926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.708962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.709236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.709267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.709616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.709648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.709909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.709940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.710273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.710303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 00:32:33.451 [2024-07-25 12:45:06.710660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.451 [2024-07-25 12:45:06.710692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.451 qpair failed and we were unable to recover it. 
00:32:33.451 [2024-07-25 12:45:06.711035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.711067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.711314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.711345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.711700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.711733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.711995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.712026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.712366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.712397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.712726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.712757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.712987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.713020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.713435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.713466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.713701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.713734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.714119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.714150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 
00:32:33.452 [2024-07-25 12:45:06.714469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.714500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.714898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.714932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.715288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.715319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.715703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.715735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.716126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.716158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.716383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.716414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.716783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.716816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.717157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.717188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.717540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.717584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.717960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.717991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 
00:32:33.452 [2024-07-25 12:45:06.718372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.718402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.718769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.718801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.719050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.719083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.719438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.719469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.719693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.719729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.720096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.720126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.720304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.720339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.720751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.720782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.721133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.721164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.721541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.721584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 
00:32:33.452 [2024-07-25 12:45:06.721955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.721986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.722303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.722334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.722694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.722726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.723084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.723114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.723484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.723515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.723776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.723809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.452 [2024-07-25 12:45:06.724150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.452 [2024-07-25 12:45:06.724182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.452 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.724589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.724622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.724858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.724892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.725232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.725263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 
00:32:33.453 [2024-07-25 12:45:06.725616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.725648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.726020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.726051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.726369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.726400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.726765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.726796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.727040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.727071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.727456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.727487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.727736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.727768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.728126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.728156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.728369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.728401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.728810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.728843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 
00:32:33.453 [2024-07-25 12:45:06.729161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.729192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.729540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.729583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.729932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.729963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.730207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.730238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.730580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.730612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.730977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.731007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.731333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.731364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.731673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.731705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.732137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.732168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 00:32:33.453 [2024-07-25 12:45:06.732510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.453 [2024-07-25 12:45:06.732541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.453 qpair failed and we were unable to recover it. 
00:32:33.453 [2024-07-25 12:45:06.732791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:33.453 [2024-07-25 12:45:06.732822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 
00:32:33.453 qpair failed and we were unable to recover it. 
[... the same three-entry sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for each subsequent reconnect attempt from 2024-07-25 12:45:06.733157 through 2024-07-25 12:45:06.812324 (log timestamps 00:32:33.453-00:32:33.459) ...]
00:32:33.459 [2024-07-25 12:45:06.812673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.812705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.813036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.813068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.813411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.813443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.813684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.813721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.814108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.814138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.814496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.814526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.814898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.814931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.815298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.815329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.815718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.815750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.816119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.816150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 
00:32:33.459 [2024-07-25 12:45:06.816535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.816578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.816948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.816979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.817319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.459 [2024-07-25 12:45:06.817349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.459 qpair failed and we were unable to recover it. 00:32:33.459 [2024-07-25 12:45:06.817689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.817722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.818064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.818094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.818450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.818480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.818864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.818895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.819239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.819270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.819543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.819598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.819940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.819970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 
00:32:33.460 [2024-07-25 12:45:06.820348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.820379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.820609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.820645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.821009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.821041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.821276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.821309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.821645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.821677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.822106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.822137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.822471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.822503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.822878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.822910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.823227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.823258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.823609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.823640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 
00:32:33.460 [2024-07-25 12:45:06.824023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.824054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.824403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.824435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.824765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.824797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.825163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.825194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.825535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.825580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.825945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.825976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.826323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.826355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.826700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.826739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.827093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.827123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.827467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.827497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 
00:32:33.460 [2024-07-25 12:45:06.827762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.827796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.828133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.828164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.828500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.828531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.828860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.828892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.829266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.829297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.829667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.829698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.830038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.830069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.830409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.830440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.830827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.830859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 00:32:33.460 [2024-07-25 12:45:06.831197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.460 [2024-07-25 12:45:06.831228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.460 qpair failed and we were unable to recover it. 
00:32:33.461 [2024-07-25 12:45:06.831492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.831523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.831885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.831918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.832255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.832285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.832652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.832684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.833032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.833063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.833318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.833349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.833706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.833738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.834116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.834147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.834487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.834518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.834877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.834910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 
00:32:33.461 [2024-07-25 12:45:06.835257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.835289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.835655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.835687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.836075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.836105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.836444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.836475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.836832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.836864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.837211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.837243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.837585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.837617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.837987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.838018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.838399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.838431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.838815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.838846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 
00:32:33.461 [2024-07-25 12:45:06.839217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.839249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.839587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.839619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.839955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.839986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.840306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.840337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.840692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.840724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.841093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.841123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.841482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.841514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.841799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.841830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.842200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.842232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.842601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.842634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 
00:32:33.461 [2024-07-25 12:45:06.842988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.843019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.843359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.843390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.843759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.843792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.844156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.844188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.844575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.844606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.844981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.845011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.461 [2024-07-25 12:45:06.845347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.461 [2024-07-25 12:45:06.845379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.461 qpair failed and we were unable to recover it. 00:32:33.462 [2024-07-25 12:45:06.845725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.462 [2024-07-25 12:45:06.845756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.462 qpair failed and we were unable to recover it. 00:32:33.462 [2024-07-25 12:45:06.846102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.462 [2024-07-25 12:45:06.846132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.462 qpair failed and we were unable to recover it. 00:32:33.462 [2024-07-25 12:45:06.846476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.462 [2024-07-25 12:45:06.846508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.462 qpair failed and we were unable to recover it. 
00:32:33.462 [2024-07-25 12:45:06.846875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.462 [2024-07-25 12:45:06.846908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.462 qpair failed and we were unable to recover it. 00:32:33.462 [2024-07-25 12:45:06.847249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.462 [2024-07-25 12:45:06.847280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.462 qpair failed and we were unable to recover it. 00:32:33.462 [2024-07-25 12:45:06.847640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.462 [2024-07-25 12:45:06.847672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.462 qpair failed and we were unable to recover it. 00:32:33.462 [2024-07-25 12:45:06.848020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.462 [2024-07-25 12:45:06.848051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.462 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.848418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.848453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.848802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.848834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.849219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.849250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.849481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.849514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.849902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.849935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.850271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.850302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 
00:32:33.736 [2024-07-25 12:45:06.850652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.850685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.850942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.850975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.851316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.851348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.851690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.851722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.852064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.852096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.852469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.852506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.852832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.852864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.853193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.853224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.853458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.853489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.853860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.853893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 
00:32:33.736 [2024-07-25 12:45:06.854254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.854286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.854615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.854648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.855026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.855056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.855419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.855450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.855790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.855822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.856195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.856227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.856579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.856612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.856991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.857021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.857360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.857391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 00:32:33.736 [2024-07-25 12:45:06.857738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.857769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.736 qpair failed and we were unable to recover it. 
00:32:33.736 [2024-07-25 12:45:06.858118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.736 [2024-07-25 12:45:06.858149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.858515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.858558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.858896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.858927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.859296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.859327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.859679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.859712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.860104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.860135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.860513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.860545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.860915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.860946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.861327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.861360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.861700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.861732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 
00:32:33.737 [2024-07-25 12:45:06.862097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.862127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.862497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.862529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.862927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.862960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.863345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.863377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.863747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.863780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.864148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.864178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.864530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.864573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.864942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.864973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.865300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.865331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.865741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.865772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 
00:32:33.737 [2024-07-25 12:45:06.866155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.866185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.866558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.866591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.866952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.866984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.867318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.867350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.867686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.867719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.868093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.868126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.868503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.868541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.868900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.868931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.869251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.869281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.869613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.869644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 
00:32:33.737 [2024-07-25 12:45:06.869993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.737 [2024-07-25 12:45:06.870024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.737 qpair failed and we were unable to recover it. 00:32:33.737 [2024-07-25 12:45:06.870362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.870393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.870741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.870772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.871122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.871152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.871386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.871418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.871760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.871793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.872149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.872181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.872526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.872569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.872935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.872966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.873303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.873334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 
00:32:33.738 [2024-07-25 12:45:06.873686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.873719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.874082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.874114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.874477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.874511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.874876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.874908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.875180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.875210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.875565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.875597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.875825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.875858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.876230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.876261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.876649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.876682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.877049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.877079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 
00:32:33.738 [2024-07-25 12:45:06.877424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.877456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.877792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.877827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.878170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.878200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.878538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.878587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.878945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.878977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.879319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.879350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.879692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.879725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.880106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.880137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.880543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.880588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.880926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.880958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 
00:32:33.738 [2024-07-25 12:45:06.881302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.881333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.881598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.881635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.881975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.738 [2024-07-25 12:45:06.882005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.738 qpair failed and we were unable to recover it. 00:32:33.738 [2024-07-25 12:45:06.882348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.882379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.882627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.882659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.883026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.883058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.883425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.883455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.883805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.883838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.884222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.884253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.884598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.884630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 
00:32:33.739 [2024-07-25 12:45:06.885008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.885039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.885417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.885448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.885835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.885866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.886237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.886267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.886660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.886691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.887042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.887073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.887326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.887359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.887687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.887719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.888079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.888110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.888453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.888484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 
00:32:33.739 [2024-07-25 12:45:06.888761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.888793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.889189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.889221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.889587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.889619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.889943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.889975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.890223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.890254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.890615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.890647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.890989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.891020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.891385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.891416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.891758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.891790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.892139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.892170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 
00:32:33.739 [2024-07-25 12:45:06.892511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.892541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.892915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.892948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.893331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.893362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.893707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.893740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.894078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.894116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.894485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.894517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.894919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.894951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.739 [2024-07-25 12:45:06.895315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.739 [2024-07-25 12:45:06.895347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.739 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.895690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.895722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.896091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.896123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 
00:32:33.740 [2024-07-25 12:45:06.896463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.896493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.896860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.896892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.897208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.897239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.897616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.897649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.898007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.898038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.898361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.898392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.898656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.898687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.899028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.899060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.899447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.899478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.899893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.899924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 
00:32:33.740 [2024-07-25 12:45:06.900261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.900292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.900623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.900657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.900920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.900955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.901289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.901320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.901657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.901689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.902007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.902039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.902372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.902403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.902710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.902741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.903086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.903117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.903455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.903486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 
00:32:33.740 [2024-07-25 12:45:06.903826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.903857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.904244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.904282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.904664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.904696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.905035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.905065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.905404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.905435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.905765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.905798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.906038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.906070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.906439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.906470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.906818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.906850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.907207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.907238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 
00:32:33.740 [2024-07-25 12:45:06.907589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.907621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.907977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.908009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.908380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.908411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.908757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.740 [2024-07-25 12:45:06.908790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.740 qpair failed and we were unable to recover it. 00:32:33.740 [2024-07-25 12:45:06.909051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.909082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.909495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.909526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.909877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.909909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.910253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.910284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.910629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.910662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.911009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.911040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 
00:32:33.741 [2024-07-25 12:45:06.911423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.911454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.911770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.911803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.912173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.912204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.912557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.912588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.912942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.912972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.913358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.913389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.913603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.913634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.914019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.914051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.914421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.914452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.914842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.914875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 
00:32:33.741 [2024-07-25 12:45:06.915217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.915248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.915581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.915613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.915983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.916015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.916363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.916396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.916758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.916791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.917158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.917189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.917576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.917609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.917884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.917916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.918252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.918282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.918659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.918691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 
00:32:33.741 [2024-07-25 12:45:06.919045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.919076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.919460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.741 [2024-07-25 12:45:06.919491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.741 qpair failed and we were unable to recover it. 00:32:33.741 [2024-07-25 12:45:06.919842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.919880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.920257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.920287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.920631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.920662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.921007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.921038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.921376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.921407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.921757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.921789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.922153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.922185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.922537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.922584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 
00:32:33.742 [2024-07-25 12:45:06.922923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.922954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.923301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.923332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.923672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.923704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.924091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.924121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.924527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.924570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.924923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.924954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.925326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.925358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.925698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.925730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.926095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.926125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.926468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.926498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 
00:32:33.742 [2024-07-25 12:45:06.926864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.926895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.927244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.927275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.927659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.927691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.928035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.928066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.928404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.928435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.928802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.928833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.929172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.929203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.929542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.929584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.929932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.929963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.930323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.930359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 
00:32:33.742 [2024-07-25 12:45:06.930746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.930778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.931151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.931181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.931490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.931521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.931790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.931822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.932188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.932219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.932586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.932618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.932974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.933005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.933357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.933388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.933761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.742 [2024-07-25 12:45:06.933792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.742 qpair failed and we were unable to recover it. 00:32:33.742 [2024-07-25 12:45:06.934133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.934163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 
00:32:33.743 [2024-07-25 12:45:06.934505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.934536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.934949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.934982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.935326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.935357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.935723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.935756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.936126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.936157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.936502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.936533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.936919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.936950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.937318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.937350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.937688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.937721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.938067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.938098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 
00:32:33.743 [2024-07-25 12:45:06.938473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.938504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.938901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.938933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.939192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.939227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.939622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.939655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 619216 Killed "${NVMF_APP[@]}" "$@" 00:32:33.743 [2024-07-25 12:45:06.940031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.940062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.940427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.940458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:32:33.743 [2024-07-25 12:45:06.940693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.940726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:33.743 [2024-07-25 12:45:06.941110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.941141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 
00:32:33.743 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:33.743 [2024-07-25 12:45:06.941480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.941510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:33.743 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:33.743 [2024-07-25 12:45:06.941875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.941907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.942141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.942172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.942570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.942602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.942942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.942975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.943352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.943383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.943775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.943807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.944156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.944187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.944560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.944592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 
00:32:33.743 [2024-07-25 12:45:06.944998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.945029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.945402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.945433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.945808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.945840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.946182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.946213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.946580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.946611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.946940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.743 [2024-07-25 12:45:06.946970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.743 qpair failed and we were unable to recover it. 00:32:33.743 [2024-07-25 12:45:06.947217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.947248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.947592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.947624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.947974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.948005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.948368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.948399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 
00:32:33.744 [2024-07-25 12:45:06.948657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.948688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.949028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.949059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.949402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.949432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.949809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.949840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.950227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=620157 00:32:33.744 [2024-07-25 12:45:06.950264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 620157 00:32:33.744 [2024-07-25 12:45:06.950680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.950713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 620157 ']' 00:32:33.744 [2024-07-25 12:45:06.951046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.951076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.744 [2024-07-25 12:45:06.951409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.951440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 
00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.744 [2024-07-25 12:45:06.951582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.951615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.951757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.951791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.744 [2024-07-25 12:45:06.952020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.952053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 12:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:33.744 [2024-07-25 12:45:06.952414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.952446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.952774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.952806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.953155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.953186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.953541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.953585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.953966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.953996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 
00:32:33.744 [2024-07-25 12:45:06.954346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.954376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.954724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.954755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.955103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.955133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.955368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.955400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.955663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.955695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.956080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.956110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.956349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.956382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.956729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.956761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.957107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.957139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.957366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.957397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 
00:32:33.744 [2024-07-25 12:45:06.957776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.957808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.958202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.744 [2024-07-25 12:45:06.958234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.744 qpair failed and we were unable to recover it. 00:32:33.744 [2024-07-25 12:45:06.958581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.958613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.958991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.959022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.959263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.959294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.959626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.959658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.959990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.960021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.960359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.960390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.960728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.960760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.961131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.961162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 
00:32:33.745 [2024-07-25 12:45:06.961389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.961422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.961786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.961817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.962166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.962196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.962541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.962584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.962972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.963009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.963341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.963372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.963562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.963594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.963864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.963894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.964254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.964286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.964625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.964658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 
00:32:33.745 [2024-07-25 12:45:06.965007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.965038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.965267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.965299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.965735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.965766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.966150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.966181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.966520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.966562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.966962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.966994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.967233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.967265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.967617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.967653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.967996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.968028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.968390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.968421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 
00:32:33.745 [2024-07-25 12:45:06.968664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.968697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.969134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.969166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.969514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.969545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.969917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.969951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.970300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.970331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.970679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.970711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.970955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.970986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.971340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.971370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.745 [2024-07-25 12:45:06.971738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.745 [2024-07-25 12:45:06.971769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.745 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.972132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.972163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 
00:32:33.746 [2024-07-25 12:45:06.972408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.972439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.972719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.972758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.973002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.973032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.973404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.973435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.973683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.973718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.974071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.974101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.974450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.974481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.974709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.974742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.975088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.975119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.975478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.975509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 
00:32:33.746 [2024-07-25 12:45:06.975718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.975750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.976103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.976134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.976545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.976589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.976954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.976989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.977382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.977412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.977843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.977876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.978245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.978276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.978618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.978649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.979008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.979039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.979425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.979456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 
00:32:33.746 [2024-07-25 12:45:06.979755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.979787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.980160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.980191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.980520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.980573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.980996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.981028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.981417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.981450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.981647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.981680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.982035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.982068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.982322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.982355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.982788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.746 [2024-07-25 12:45:06.982821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.746 qpair failed and we were unable to recover it. 00:32:33.746 [2024-07-25 12:45:06.983231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.983264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 
00:32:33.747 [2024-07-25 12:45:06.983506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.983538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.983936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.983967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.984108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.984142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.984379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.984413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.984789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.984822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.985200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.985232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.985589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.985622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.986007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.986038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.986395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.986427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.986595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.986626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 
00:32:33.747 [2024-07-25 12:45:06.987015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.987045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.987412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.987444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.987611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.987653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.988027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.988059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.988379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.988410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.988670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.988701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.989066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.989097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.989447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.989478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.989839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.989872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.990223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.990254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 
00:32:33.747 [2024-07-25 12:45:06.990635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.990667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.991026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.991058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.991446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.991477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.991714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.991745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.992132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.992162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.992508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.992540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 00:32:33.747 [2024-07-25 12:45:06.992734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.747 [2024-07-25 12:45:06.992766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd67e30 with addr=10.0.0.2, port=4420 00:32:33.747 qpair failed and we were unable to recover it. 
00:32:33.747 [2024-07-25 12:45:06.993028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd65bd0 is same with the state(5) to be set 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Read completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.747 Write completed with error (sct=0, sc=8) 00:32:33.747 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 [2024-07-25 12:45:06.993974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:33.748 [2024-07-25 12:45:06.994421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.994467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf4c000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 
00:32:33.748 [2024-07-25 12:45:06.994987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.995086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf4c000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.995480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.995517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf4c000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Read completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 Write completed with error (sct=0, sc=8) 00:32:33.748 starting I/O failed 00:32:33.748 [2024-07-25 12:45:06.995951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:33.748 [2024-07-25 12:45:06.996235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.996261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.996798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.996866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.997274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.997294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.997517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.997536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.997910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.997978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.998143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.998165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.998504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.998522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.998944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.998963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.999290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.999321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:06.999796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:06.999863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 
00:32:33.748 [2024-07-25 12:45:07.000238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.000259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.000487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.000506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.000873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.000892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.001227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.001246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.001589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.001606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.001914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.001932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.002275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.002293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.002632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.002649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.002989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.003007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.748 qpair failed and we were unable to recover it. 00:32:33.748 [2024-07-25 12:45:07.003199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.748 [2024-07-25 12:45:07.003216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 
00:32:33.749 [2024-07-25 12:45:07.003586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.003604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.003964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.003982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.004283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.004300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.004616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.004634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.004973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.004990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.005350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.005367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.005712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.005730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.005752] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:32:33.749 [2024-07-25 12:45:07.005807] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.749 [2024-07-25 12:45:07.006078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.006095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.006426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.006443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 
00:32:33.749 [2024-07-25 12:45:07.006681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.006699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.007001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.007018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.007351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.007368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.007571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.007588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.007903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.007920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.008294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.008311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.008648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.008665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.009007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.009024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.009353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.009370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.009705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.009722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 
00:32:33.749 [2024-07-25 12:45:07.010055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.010072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.010276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.010298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.010495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.010512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.010856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.010875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.010982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.010999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.011342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.011359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.011703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.011721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.012057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.012074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.012405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.012426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.012679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.012696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 
00:32:33.749 [2024-07-25 12:45:07.013040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.013057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.013372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.013390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.013510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.013527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.013859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.013877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.014214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.014231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.749 [2024-07-25 12:45:07.014442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.749 [2024-07-25 12:45:07.014459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.749 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.014755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.014773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.015110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.015126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.015467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.015484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.015819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.015836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 
00:32:33.750 [2024-07-25 12:45:07.016156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.016173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.016499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.016517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.016784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.016802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.017121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.017138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.017477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.017494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.017681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.017700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.017894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.017911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.018178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.018195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.018492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.018509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.018740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.018758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 
00:32:33.750 [2024-07-25 12:45:07.019100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.019117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.019415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.019432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.019635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.019653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.019982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.019999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.020349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.020366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.020692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.020710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.021035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.021051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.021384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.021401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.021724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.021742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.022060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.022077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 
00:32:33.750 [2024-07-25 12:45:07.022419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.022436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.022775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.022792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.023124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.023141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.023472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.023489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.023818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.023835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.024163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.024180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.024506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.024523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.024847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.024864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.025068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.025089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.025421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.025438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 
00:32:33.750 [2024-07-25 12:45:07.025773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.025790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.750 qpair failed and we were unable to recover it. 00:32:33.750 [2024-07-25 12:45:07.026104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.750 [2024-07-25 12:45:07.026121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.026301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.026317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.026650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.026668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.026994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.027010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.027201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.027220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.027572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.027590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.027809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.027825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.028159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.028176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.028386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.028405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 
00:32:33.751 [2024-07-25 12:45:07.028714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.028731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.028920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.028938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.029272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.029289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.029622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.029640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.029977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.029994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.030192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.030208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.030545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.030570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.030881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.030898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.031221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.031238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.031582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.031600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 
00:32:33.751 [2024-07-25 12:45:07.031944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.031961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.032282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.032299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.032501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.032519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.032852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.032870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.033077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.033094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.033415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.033432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.033651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.033669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.033950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.033967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.034221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.034238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.034428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.034445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 
00:32:33.751 [2024-07-25 12:45:07.034654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.034671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.751 [2024-07-25 12:45:07.035021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.751 [2024-07-25 12:45:07.035039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.751 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.035250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.035267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.035561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.035579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.035795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.035813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.036143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.036161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.036485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.036502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.036841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.036859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.037184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.037205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.037525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.037541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 
00:32:33.752 [2024-07-25 12:45:07.037791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.037809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.038143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.038159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.038479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.038496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.038707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.038725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.039102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.039119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.039447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.039464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.039791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.039809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.040021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.040039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.040374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.040391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.040721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.040738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 
00:32:33.752 [2024-07-25 12:45:07.040937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.040953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.041152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.041169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.041388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.041405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.041776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.041794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.042118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.042135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.042472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.042489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.042700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.042718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.043009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.043026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.043360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.043377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.043707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.043725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 
00:32:33.752 [2024-07-25 12:45:07.043945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.043963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.044298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.044315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.752 [2024-07-25 12:45:07.044616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.044634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.044972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.044989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.045219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.045235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.045586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.045603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.045949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.045965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.752 [2024-07-25 12:45:07.046296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.752 [2024-07-25 12:45:07.046312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.752 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.046653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.046670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.046879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.046896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 
00:32:33.753 [2024-07-25 12:45:07.047124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.047142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.047448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.047465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.047794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.047812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.047992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.048010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.048366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.048384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.048592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.048610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.048885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.048902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.049137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.049154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.049514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.049535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.049869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.049887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 
00:32:33.753 [2024-07-25 12:45:07.050198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.050215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.050516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.050533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.050957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.050974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.051302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.051320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.051617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.051634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.051965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.051982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.052318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.052334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.052565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.052583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.052913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.052930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.053262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.053279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 
00:32:33.753 [2024-07-25 12:45:07.053508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.053525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.053893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.053910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.054237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.054253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.054623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.054641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.054988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.055005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.055327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.055344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.055675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.055693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.055999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.056016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.056358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.056375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.056584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.056603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 
00:32:33.753 [2024-07-25 12:45:07.056886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.056903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.057236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.057252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.057613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.057631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.057964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.753 [2024-07-25 12:45:07.057981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.753 qpair failed and we were unable to recover it. 00:32:33.753 [2024-07-25 12:45:07.058148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.058166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.058496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.058514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.058735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.058753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.059080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.059097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.059425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.059441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.059766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.059784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 
00:32:33.754 [2024-07-25 12:45:07.060107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.060125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.060418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.060436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.060658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.060676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.060920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.060937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.061271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.061288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.061493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.061511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.061833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.061851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.062062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.062080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.062270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.062295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.062639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.062657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 
00:32:33.754 [2024-07-25 12:45:07.062991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.063009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.063344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.063361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.063695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.063712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.064040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.064057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.064305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.064322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.064652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.064670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.064868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.064885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.065228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.065245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.065579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.065597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.065925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.065942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 
00:32:33.754 [2024-07-25 12:45:07.066269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.066286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.066456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.066474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.066825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.066842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.067058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.067074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.067405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.067422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.067721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.067739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.068062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.068080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.068406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.068423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.068722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.068739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 00:32:33.754 [2024-07-25 12:45:07.069065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.754 [2024-07-25 12:45:07.069082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.754 qpair failed and we were unable to recover it. 
00:32:33.754 [2024-07-25 12:45:07.069407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.069424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.069722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.069740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.070060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.070077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.070406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.070424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.070622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.070640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.070874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.070891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.071217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.071234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.071411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.071429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.071732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.071751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.071988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.072005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 
00:32:33.755 [2024-07-25 12:45:07.072289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.072307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.072629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.072647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.072991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.073008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.073328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.073345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.073669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.073686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.073892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.073910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.074246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.074263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.074594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.074612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.074939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.074961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.075281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.075300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 
00:32:33.755 [2024-07-25 12:45:07.075621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.075639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.075943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.075960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.076284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.076301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.076637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.076654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.076973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.076990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.077202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.077220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.077557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.077575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.077910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.077927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.078142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.078160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.078453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.078470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 
00:32:33.755 [2024-07-25 12:45:07.078674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.078692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.078978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.078994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.079299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.079316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.079633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.079651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.755 qpair failed and we were unable to recover it. 00:32:33.755 [2024-07-25 12:45:07.079876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.755 [2024-07-25 12:45:07.079893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.080251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.080268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.080474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.080492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.080879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.080897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.081111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.081127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.081453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.081470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 
00:32:33.756 [2024-07-25 12:45:07.081801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.081819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.082172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.082189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.082481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.082499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.082796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.082813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.083175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.083192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.083518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.083536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.083898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.083917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.084105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.084123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.084430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.084448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.084743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.084760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 
00:32:33.756 [2024-07-25 12:45:07.085080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.085097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.085421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.085438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.085765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.085782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.086102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.086119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.086318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.086336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.086541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.086568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.086920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.086937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.087149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.087166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.087505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.087526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.087755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.087773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 
00:32:33.756 [2024-07-25 12:45:07.088099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.088116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.088440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.088457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.088824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.088841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.089175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.089193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.089519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.089537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.089784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.089803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.756 [2024-07-25 12:45:07.090126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.756 [2024-07-25 12:45:07.090143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.756 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.090469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.090486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.090795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.090813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.091144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.091161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 
00:32:33.757 [2024-07-25 12:45:07.091486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.091503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.091837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.091855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.092054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.092071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.092400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.092417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.092719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.092737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.093064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.093081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.093409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.093426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.093768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.093785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.094120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.094138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.094455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.094473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 
00:32:33.757 [2024-07-25 12:45:07.094812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.094829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.095148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.095165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.095486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.095503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.095712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.095730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.096069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.096087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.096293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.096310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.096645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.096662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.097003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.097021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.097362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.097379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.097606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.097623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 
00:32:33.757 [2024-07-25 12:45:07.097853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.097871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.098208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.098226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.098557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.098574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.098886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.098903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.099246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.099263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.099584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.099602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.099945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.099962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.100169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.100187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.100512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.100532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.100766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.100785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 
00:32:33.757 [2024-07-25 12:45:07.101011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.101029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.101245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.101264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.757 [2024-07-25 12:45:07.101596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.757 [2024-07-25 12:45:07.101614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.757 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.101934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.101951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.102277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.102294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.102490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.102507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.102808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.102826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.103129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.103145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.103480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.103498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.103789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.103808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 
00:32:33.758 [2024-07-25 12:45:07.104144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.104162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.104485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.104503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.104829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.104847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.105186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.105203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.105494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.105511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.105840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.105858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.106179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.106198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.106516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.106533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.106859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.106878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.107199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.107215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 
00:32:33.758 [2024-07-25 12:45:07.107538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.107562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.107889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.107906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.108135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.108152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.108501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.108518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.108881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.108900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.109257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.109276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.109577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.109594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.109941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.109958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.110276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.110293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.110500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.110522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 
00:32:33.758 [2024-07-25 12:45:07.110840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.110858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.111191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.111208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.111421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.111438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.111753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.111770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.112100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.112117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.112441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.112458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.112782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.112800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.113126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.113142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.758 [2024-07-25 12:45:07.113467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.758 [2024-07-25 12:45:07.113487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.758 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.113788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.113806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 
00:32:33.759 [2024-07-25 12:45:07.114133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.114150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.114473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.114490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.114819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.114836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.115058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.115075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.115379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.115397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.115718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.115736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.116064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.116082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.116425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.116443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.116754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.116771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.117110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.117127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 
00:32:33.759 [2024-07-25 12:45:07.117486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.117503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.117844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.117862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.118178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.118196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.118422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.118439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.118790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.118807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.119131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.119149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.119372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.119389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.119610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.119629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.119978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.119996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.120187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.120206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 
00:32:33.759 [2024-07-25 12:45:07.120565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.120583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.120872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.120890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.121104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.121122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.121444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.121462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.121792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.121810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.122128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.122145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.122357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.122373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.122710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.122728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.123061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.123078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.123287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.123306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 
00:32:33.759 [2024-07-25 12:45:07.123655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.123673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.124010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.124027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.124356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.124372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.124703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.124721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.124926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.759 [2024-07-25 12:45:07.124943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.759 qpair failed and we were unable to recover it. 00:32:33.759 [2024-07-25 12:45:07.125268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.125286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.125610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.125627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.125954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.125972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.126303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.126326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.126633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.126650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 
00:32:33.760 [2024-07-25 12:45:07.126984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.127001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.127186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.127204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.127534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.127575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.127902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.127919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.128208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.128225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.128451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.128468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.128711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.128729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.129102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.129119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.129410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.129428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.129652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.129670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 
00:32:33.760 [2024-07-25 12:45:07.130002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.130019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.130343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.130360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.130540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.130566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.130902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.130919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.131275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.131293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.131619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.131636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.131956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.131974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.132180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.132197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.132507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.132524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.132853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.132870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 
00:32:33.760 [2024-07-25 12:45:07.133190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.133208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.133417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.133435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.133719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.133736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.134063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.134080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.134270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.134288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.134606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.760 [2024-07-25 12:45:07.134623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.760 qpair failed and we were unable to recover it. 00:32:33.760 [2024-07-25 12:45:07.134952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.134970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.135287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.135304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.135623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.135642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.135743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.135760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 
00:32:33.761 [2024-07-25 12:45:07.136036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.136054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.136268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.136286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.136621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.136639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.136985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.137002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.137192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.137209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.137521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.137538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.137902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.137922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.138243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.138260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.138614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.138635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.138813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.138831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 
00:32:33.761 [2024-07-25 12:45:07.139009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.139027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.139354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.139371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.139663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.139680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.140033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.140051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.140394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.140411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.140619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.140638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.140980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.140998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.141321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.141338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.141668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.141686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.142023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.142040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 
00:32:33.761 [2024-07-25 12:45:07.142364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.142381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:33.761 [2024-07-25 12:45:07.142705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.761 [2024-07-25 12:45:07.142723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:33.761 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.143053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.143075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.143406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.143426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.143758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.143775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.144062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.144080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.144376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.144393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.144720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.144739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.145058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.145075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.145402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.145419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 
00:32:34.038 [2024-07-25 12:45:07.145717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.145736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.146064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.146082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.146292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.146309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.146609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.146627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.146939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.146957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.147288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.147306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.147488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.147505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.147851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.147869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.148241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.148258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.148614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.148632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 
00:32:34.038 [2024-07-25 12:45:07.148666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.038 [2024-07-25 12:45:07.148962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.148980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.149310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.149327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.149522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.149540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.149871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.149889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.150219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.150236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.150415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.150432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.150571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.150590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.150800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.150818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 00:32:34.038 [2024-07-25 12:45:07.150961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.038 [2024-07-25 12:45:07.150979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.038 qpair failed and we were unable to recover it. 
00:32:34.044 [2024-07-25 12:45:07.214146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.214164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.214496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.214514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.214817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.214835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.215172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.215189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.215522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.215538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.215762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.215780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.216114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.216131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.216447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.216465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.216817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.216835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.217168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.217186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 
00:32:34.044 [2024-07-25 12:45:07.217597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.217615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.217717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.217734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.218056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.218075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.218358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.218376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.218689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.218707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.218892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.044 [2024-07-25 12:45:07.218910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.044 qpair failed and we were unable to recover it. 00:32:34.044 [2024-07-25 12:45:07.219222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.219240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.219560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.219579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.219917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.219934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.220120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.220139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 
00:32:34.045 [2024-07-25 12:45:07.220337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.220356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.220695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.220714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.221038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.221056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.221380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.221398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.221756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.221775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.222097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.222116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.222442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.222459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.222783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.222802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.223138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.223155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.223485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.223502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 
00:32:34.045 [2024-07-25 12:45:07.223714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.223732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.223988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.224005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.224353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.224370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.224703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.224721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.225055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.225072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.225395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.225413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.225642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.225660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.226010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.226028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.226411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.226430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.226657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.226678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 
00:32:34.045 [2024-07-25 12:45:07.227004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.227022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.227097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.227114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.227430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.227447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.227566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.227584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.228105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.228213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf44000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.228579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.228623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf44000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.229005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.229024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.229390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.229407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.229722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.229740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.230063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.230082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 
00:32:34.045 [2024-07-25 12:45:07.230408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.230425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.230652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.230670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.045 [2024-07-25 12:45:07.231026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.045 [2024-07-25 12:45:07.231043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.045 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.231256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.231273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.231509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.231527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.231861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.231879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.232244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.232262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.232472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.232490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.232705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.232724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.233025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.233043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 
00:32:34.046 [2024-07-25 12:45:07.233380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.233398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.233716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.233734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.234070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.234088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.234428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.234448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.234761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.234780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.234981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.234999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.235323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.235340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.235715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.235733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.236036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.236055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.236376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.236395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 
00:32:34.046 [2024-07-25 12:45:07.236622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.236641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.236983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.237002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.237305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.237324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.237567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.237587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.237832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.237851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.238063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.238083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.238411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.238430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.238765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.238786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.239097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.239115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.239310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.239342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 
00:32:34.046 [2024-07-25 12:45:07.239671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.239690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.240020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.240037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.240264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.240281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.240518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.240535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.240926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.240944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.241157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.241174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.241507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.241524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.241754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.241773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.242026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.046 [2024-07-25 12:45:07.242044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.046 qpair failed and we were unable to recover it. 00:32:34.046 [2024-07-25 12:45:07.242388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.242405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 
00:32:34.047 [2024-07-25 12:45:07.242765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.242783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.243016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.243034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.243285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.243302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.243524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.243543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.243771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.243787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.244136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.244153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.244482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.244499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.244709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.244728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.245009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.245027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.245241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.245258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 
00:32:34.047 [2024-07-25 12:45:07.245461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.245478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.245787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.245805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.246131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.246148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.246362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.246380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.246582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.246601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.246854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.246870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.247156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.247174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.247502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.247519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.247848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.247867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.248179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.248197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 
00:32:34.047 [2024-07-25 12:45:07.248413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.248430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.248751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.248769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.249100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.249118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.249427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.249444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.249767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.249784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.250121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.250139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.250460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.250478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.250698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.250717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.250931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.250948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.047 qpair failed and we were unable to recover it. 00:32:34.047 [2024-07-25 12:45:07.251272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.047 [2024-07-25 12:45:07.251297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 
00:32:34.048 [2024-07-25 12:45:07.251504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.251521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.251731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.251748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.252066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.252084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.252417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.252435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.252765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.252783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.253153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.253171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.253468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.253485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.253806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.253824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.254155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.254173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.254506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.254524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 
00:32:34.048 [2024-07-25 12:45:07.254886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.254904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.255234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.255252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.255452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.255470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.255664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.255683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.256000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.256018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.256341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.256358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.256680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.256698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.257034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.257051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.257382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.257400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.257807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.257825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 
00:32:34.048 [2024-07-25 12:45:07.258034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.258052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.258376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.258393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.258577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.258596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.258841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.258859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.259168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.259185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.259393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.259411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.259750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.259768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.260031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.260050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.260259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.260276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.260482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.260500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 
00:32:34.048 [2024-07-25 12:45:07.260801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.260819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.261021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.261039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.261348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.261365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.261611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.261629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.261958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.261977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.048 [2024-07-25 12:45:07.262237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.048 [2024-07-25 12:45:07.262255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.048 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.262575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.262592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.262878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.262897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.263217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.263235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.263428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.263448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 
00:32:34.049 [2024-07-25 12:45:07.263627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.263647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.264004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.264022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.264358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.264376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.264571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.264589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.264770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.264789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.265110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.265128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.265312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.265331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.265697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.265715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.265940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.265958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.266296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.266314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 
00:32:34.049 [2024-07-25 12:45:07.266638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.266656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.266983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.267000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.267353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.267370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.267685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.267703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.268017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.268034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.268355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.268372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.268701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.268719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.269032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.269049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.269369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.269386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.269706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.269724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 
00:32:34.049 [2024-07-25 12:45:07.270062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.270080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.270391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.270408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.270827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.270845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.271162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.271180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.271508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.271526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.271874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.271893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.272227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.272245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.272566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.272585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.272868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.272885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.273199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.273216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 
00:32:34.049 [2024-07-25 12:45:07.273485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.273503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.273829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.049 [2024-07-25 12:45:07.273846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.049 qpair failed and we were unable to recover it. 00:32:34.049 [2024-07-25 12:45:07.274166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.274185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.274506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.274523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.274737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.274756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.275098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.275115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.275422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.275440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.275626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.275645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.275965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.275982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.276217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.276238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 
00:32:34.050 [2024-07-25 12:45:07.276453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.276472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.276806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.276824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.277140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.277157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.277364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.277383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.277601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.277619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.277951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.277968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.278303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.278320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.278663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.278681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.278976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.278993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.279316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.279334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 
00:32:34.050 [2024-07-25 12:45:07.279647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.279665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.280063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.280080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.280397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.280415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.280613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.280632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.280856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.280874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.281203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.281221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.281424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.281443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.281741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.281759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.282069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.282087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.282409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.282427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 
00:32:34.050 [2024-07-25 12:45:07.282760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.282778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.283097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.283115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.283437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.283455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.283758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.283776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.284088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.284105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.284436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.284453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.284752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.284770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.284951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.050 [2024-07-25 12:45:07.284970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.050 qpair failed and we were unable to recover it. 00:32:34.050 [2024-07-25 12:45:07.285217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.285236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.285566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.285584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 
00:32:34.051 [2024-07-25 12:45:07.285826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.285845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.286167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.286185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.286533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.286559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.286876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.286894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.287206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.287224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.287541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.287567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.287757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.287774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.287977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.287995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.288278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.288295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.288623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.288646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 
00:32:34.051 [2024-07-25 12:45:07.288973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.288989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.289325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.289343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.289669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.289687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.289984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.290001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.290324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.290340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.290662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.290681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.291015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.291033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.291399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.291416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.291595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.291613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.291902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.291919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 
00:32:34.051 [2024-07-25 12:45:07.292133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.292150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.292466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.292484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.292664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.292682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.293013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.293031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.293229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.293247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.293575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.293593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.293910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.293927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.294122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.294139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.294437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.294455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.294761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.294779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 
00:32:34.051 [2024-07-25 12:45:07.294975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.294992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.295276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.295294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.295515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.295532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.295882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.295901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.051 qpair failed and we were unable to recover it. 00:32:34.051 [2024-07-25 12:45:07.296216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.051 [2024-07-25 12:45:07.296233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.296442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.296459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.296793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.296811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.297141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.297158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.297475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.297492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.297814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.297831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 
00:32:34.052 [2024-07-25 12:45:07.298152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.298170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.298488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.298506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.298844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.298861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.299187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.299205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.299416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.299434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.299655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.299673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.299998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.300015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.300297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.300314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.300600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.300618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.300861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.300883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 
00:32:34.052 [2024-07-25 12:45:07.301208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.301225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.301569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.301588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.301897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.301914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.302229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.302246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.302457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.302476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.302813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.302831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.303152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.303170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.303518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.303535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.303904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.303923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.304246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.304263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 
00:32:34.052 [2024-07-25 12:45:07.304592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.304610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.304940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.304957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.305291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.305309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.305524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.305542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.052 [2024-07-25 12:45:07.305758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.052 [2024-07-25 12:45:07.305775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.052 qpair failed and we were unable to recover it. 00:32:34.053 [2024-07-25 12:45:07.306107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.053 [2024-07-25 12:45:07.306125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.053 qpair failed and we were unable to recover it. 00:32:34.053 [2024-07-25 12:45:07.306449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.053 [2024-07-25 12:45:07.306466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.053 qpair failed and we were unable to recover it. 00:32:34.053 [2024-07-25 12:45:07.306782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.053 [2024-07-25 12:45:07.306800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.053 qpair failed and we were unable to recover it. 00:32:34.053 [2024-07-25 12:45:07.307118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.053 [2024-07-25 12:45:07.307135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.053 qpair failed and we were unable to recover it. 00:32:34.053 [2024-07-25 12:45:07.307459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.053 [2024-07-25 12:45:07.307477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.053 qpair failed and we were unable to recover it. 
00:32:34.053 [2024-07-25 12:45:07.307689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.053 [2024-07-25 12:45:07.307708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.053 qpair failed and we were unable to recover it.
[log condensed: this three-line sequence (connect() failed with errno = 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 12:45:07.307689 through 12:45:07.316945]
00:32:34.053 [2024-07-25 12:45:07.317163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.053 [2024-07-25 12:45:07.317181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.053 qpair failed and we were unable to recover it.
[log condensed: the same connection-failure sequence repeats at 12:45:07.317560, .317897, .318231 and .318565]
00:32:34.054 [2024-07-25 12:45:07.318770] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:34.054 [2024-07-25 12:45:07.318856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:34.054 [2024-07-25 12:45:07.318884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:34.054 [2024-07-25 12:45:07.318907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:34.054 [2024-07-25 12:45:07.318928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[log condensed: the connection-failure sequence repeats at 12:45:07.318921 and .319233, interleaved with the notices above]
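[reference: a minimal sketch of the trace-capture steps described by the app_setup_trace notices above, assuming the spdk_trace tool from this SPDK build is on PATH; the /tmp destination path is illustrative:
  # capture a snapshot of trace events from the running nvmf target, shared-memory instance 0
  spdk_trace -s nvmf -i 0
  # 'spdk_trace' with no parameters also works if this is the only SPDK application running
  spdk_trace
  # or copy the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0]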
00:32:34.054 [2024-07-25 12:45:07.319186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:32:34.054 [2024-07-25 12:45:07.319379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:32:34.054 [2024-07-25 12:45:07.319451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:32:34.054 [2024-07-25 12:45:07.319457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[log condensed: the connection-failure sequence repeats, interleaved with the reactor notices, at 12:45:07.319574, .319928, .320280, .320516, .320826, .321043, .321368, .321704 and .321935]
[log condensed: the same connection-failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 12:45:07.322236 through 12:45:07.371146]
00:32:34.059 [2024-07-25 12:45:07.371470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.371488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.371668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.371686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.371970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.371987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.372317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.372334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.372516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.372532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.372857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.372874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.373089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.373106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.373393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.373410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.373722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.373739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.374081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.374097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 
00:32:34.059 [2024-07-25 12:45:07.374274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.374299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.374497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.374514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.374709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.374728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.375055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.375071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.375397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.375414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.375765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.375782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.376100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.376117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.376426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.376443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.376767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.376784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.376985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.377002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 
00:32:34.059 [2024-07-25 12:45:07.377283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.377301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.377629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.377646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.377842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.377859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.378147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.378165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.378482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.378499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.378705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.378723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.379006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.379023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.379410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.379426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.379758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.379775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.380079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.380095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 
00:32:34.059 [2024-07-25 12:45:07.380416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.380433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.380751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.380769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.059 [2024-07-25 12:45:07.381093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.059 [2024-07-25 12:45:07.381110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.059 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.381431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.381448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.381749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.381767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.382077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.382095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.382417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.382433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.382609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.382626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.382968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.382985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.383298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.383314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 
00:32:34.060 [2024-07-25 12:45:07.383636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.383654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.383973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.383990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.384168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.384186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.384617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.384634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.384968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.384985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.385162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.385180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.385477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.385494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.385820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.385838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.386177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.386194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.386520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.386537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 
00:32:34.060 [2024-07-25 12:45:07.386855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.386875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.387046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.387064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.387308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.387325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.387541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.387573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.387896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.387913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.388118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.388135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.388332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.388349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.388642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.388659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.388946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.388963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.389292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.389309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 
00:32:34.060 [2024-07-25 12:45:07.389525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.389542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.389850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.389869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.390065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.390083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.390379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.390396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.390605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.390622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.390951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.390969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.391183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.391200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.391539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.391568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.060 [2024-07-25 12:45:07.391815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.060 [2024-07-25 12:45:07.391833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.060 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.392034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.392050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 
00:32:34.061 [2024-07-25 12:45:07.392234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.392250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.392576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.392593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.392922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.392939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.393258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.393275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.393483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.393500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.393720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.393738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.394070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.394086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.394267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.394285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.394589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.394607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.394934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.394951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 
00:32:34.061 [2024-07-25 12:45:07.395283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.395301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.395620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.395637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.395851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.395867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.396203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.396219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.396521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.396538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.396847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.396864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.397186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.397203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.397530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.397555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.397759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.397776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.397957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.397974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 
00:32:34.061 [2024-07-25 12:45:07.398324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.398346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.398556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.398576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.398913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.398931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.399133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.399149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.399441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.399459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.399787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.399804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.400129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.400146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.400482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.400499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.400720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.400737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.400949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.400966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 
00:32:34.061 [2024-07-25 12:45:07.401292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.401308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.401638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.401656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.402025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.402042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.402362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.402379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.402707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.061 [2024-07-25 12:45:07.402725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.061 qpair failed and we were unable to recover it. 00:32:34.061 [2024-07-25 12:45:07.402936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.402953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.403289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.403306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.403616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.403633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.403810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.403827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.404012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.404029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 
00:32:34.062 [2024-07-25 12:45:07.404231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.404249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.404542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.404567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.404753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.404771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.405094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.405110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.405449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.405465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.405639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.405657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.405945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.405961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.406291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.406308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.406630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.406646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.406857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.406874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 
00:32:34.062 [2024-07-25 12:45:07.407169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.407187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.407509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.407525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.407766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.407783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.407956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.407974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.408176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.408193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.408553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.408571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.408743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.408761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.409097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.409114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.409288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.409305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 00:32:34.062 [2024-07-25 12:45:07.409646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.062 [2024-07-25 12:45:07.409664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.062 qpair failed and we were unable to recover it. 
00:32:34.062 [2024-07-25 12:45:07.409979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.062 [2024-07-25 12:45:07.409999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.062 qpair failed and we were unable to recover it.
00:32:34.062 [... the same three-line error sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 12:45:07.410 and 12:45:07.471 ...]
00:32:34.346 [2024-07-25 12:45:07.471840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.346 [2024-07-25 12:45:07.471856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.346 qpair failed and we were unable to recover it.
00:32:34.346 [2024-07-25 12:45:07.472020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.472038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.472397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.472415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.472725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.472743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.472832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.472849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.473057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.473074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.473288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.473306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.473710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.473728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.473961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.473978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.474183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.474200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.474400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.474417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 
00:32:34.346 [2024-07-25 12:45:07.474754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.474772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.475105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.475123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.475450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.475469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.475705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.475722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.475915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.475933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.476328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.476345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.476661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.476679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.476960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.476982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.477310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.477327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.477514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.477531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 
00:32:34.346 [2024-07-25 12:45:07.477750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.477767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.478075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.478092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.478391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.478409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.478720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.478738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.478947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.478964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.479297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.479314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.479682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.479699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.479843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.479862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.480155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.480173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.480516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.480533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 
00:32:34.346 [2024-07-25 12:45:07.480861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.480879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.481087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.481106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.481445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.481463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.346 [2024-07-25 12:45:07.481783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.346 [2024-07-25 12:45:07.481801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.346 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.482127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.482145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.482476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.482494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.482794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.482812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.483136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.483154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.483490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.483507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.483834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.483851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 
00:32:34.347 [2024-07-25 12:45:07.484040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.484058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.484358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.484376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.484557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.484576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.484932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.484949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.485274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.485292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.485491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.485509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.485851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.485868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.486208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.486226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.486561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.486579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.486738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.486757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 
00:32:34.347 [2024-07-25 12:45:07.487082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.487100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.487288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.487306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.487579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.487597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.487967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.487984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.488280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.488297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.488509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.488526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.488902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.488920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.489138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.489159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.489350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.489367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.489653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.489672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 
00:32:34.347 [2024-07-25 12:45:07.490023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.490040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.490227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.490244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.490554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.490572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.490777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.490794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.491025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.491041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.491372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.491390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.347 [2024-07-25 12:45:07.491718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.347 [2024-07-25 12:45:07.491736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.347 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.491925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.491942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.492267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.492284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.492491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.492509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 
00:32:34.348 [2024-07-25 12:45:07.492842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.492860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.493227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.493245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.493424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.493442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.493657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.493675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.493992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.494010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.494337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.494355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.494441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.494460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.494748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.494766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.494948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.494966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.495284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.495301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 
00:32:34.348 [2024-07-25 12:45:07.495634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.495652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.495863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.495880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.496113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.496130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.496468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.496486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.496669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.496686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.496930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.496947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.497278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.497295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.497466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.497484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.497804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.497823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.497889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.497904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 
00:32:34.348 [2024-07-25 12:45:07.498197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.498214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.498396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.498413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.498727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.498745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.498935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.498953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.499279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.499297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.499620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.499638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.499968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.499985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.500207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.500229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.348 qpair failed and we were unable to recover it. 00:32:34.348 [2024-07-25 12:45:07.500448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.348 [2024-07-25 12:45:07.500465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.500788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.500806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 
00:32:34.349 [2024-07-25 12:45:07.501125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.501142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.501471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.501488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.501809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.501827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.502169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.502186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.502514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.502532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.502727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.502745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.503064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.503081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.503414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.503431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.503648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.503666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.504030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.504047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 
00:32:34.349 [2024-07-25 12:45:07.504355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.504372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.504726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.504744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.504935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.504953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.505163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.505181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.505522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.505541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.505760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.505779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.506111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.506128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.506460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.506478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.506772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.506789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.507116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.507133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 
00:32:34.349 [2024-07-25 12:45:07.507314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.507332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.507669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.507687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.507888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.507906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.508098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.508116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.508455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.508473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.508658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.508676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.508868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.508885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.509067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.509084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.509184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.509202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 00:32:34.349 [2024-07-25 12:45:07.509416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.349 [2024-07-25 12:45:07.509433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.349 qpair failed and we were unable to recover it. 
00:32:34.349 [2024-07-25 12:45:07.509617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.349 [2024-07-25 12:45:07.509636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.349 qpair failed and we were unable to recover it.
00:32:34.349 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 12:45:07.509855 through 12:45:07.571185 (console timestamps 00:32:34.349-00:32:34.355) ...]
00:32:34.355 [2024-07-25 12:45:07.571514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.355 [2024-07-25 12:45:07.571531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.355 qpair failed and we were unable to recover it. 00:32:34.355 [2024-07-25 12:45:07.571751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.355 [2024-07-25 12:45:07.571769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.355 qpair failed and we were unable to recover it. 00:32:34.355 [2024-07-25 12:45:07.572129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.355 [2024-07-25 12:45:07.572146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.355 qpair failed and we were unable to recover it. 00:32:34.355 [2024-07-25 12:45:07.572485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.355 [2024-07-25 12:45:07.572503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.355 qpair failed and we were unable to recover it. 00:32:34.355 [2024-07-25 12:45:07.572679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.355 [2024-07-25 12:45:07.572698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.355 qpair failed and we were unable to recover it. 00:32:34.355 [2024-07-25 12:45:07.572954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.355 [2024-07-25 12:45:07.572972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.355 qpair failed and we were unable to recover it. 00:32:34.355 [2024-07-25 12:45:07.573174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.355 [2024-07-25 12:45:07.573193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.355 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.573533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.573560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.573815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.573833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.574038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.574056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 
00:32:34.356 [2024-07-25 12:45:07.574390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.574408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.574609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.574628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.574745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.574763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.575076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.575094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.575432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.575449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.575649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.575666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.575854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.575872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.576218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.576238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.576560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.576578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.576919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.576936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 
00:32:34.356 [2024-07-25 12:45:07.577289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.577307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.577488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.577506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.577708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.577726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.578066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.578083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.578409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.578428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.578761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.578779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.578989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.579007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.579336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.579355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.579533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.579559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.579862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.579879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 
00:32:34.356 [2024-07-25 12:45:07.580203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.580222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.580558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.580576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.580891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.580911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.581077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.581094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.581306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.581323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.581520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.581537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.581665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.581683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.581871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.581889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.582219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.582237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.582572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.582590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 
00:32:34.356 [2024-07-25 12:45:07.582906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.582924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.583244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.583261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.583586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.356 [2024-07-25 12:45:07.583604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.356 qpair failed and we were unable to recover it. 00:32:34.356 [2024-07-25 12:45:07.583932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.583949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.584280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.584298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.584625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.584644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.585019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.585037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.585220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.585238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.585520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.585540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.585876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.585896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 
00:32:34.357 [2024-07-25 12:45:07.586082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.586099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.586301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.586319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.586519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.586537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.586759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.586777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.586915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.586933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.587253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.587270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.587581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.587599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.587940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.587958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.588283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.588302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.588638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.588656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 
00:32:34.357 [2024-07-25 12:45:07.588995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.589013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.589338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.589356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.589541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.589574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.589922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.589938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.590229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.590250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.590455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.590472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.590716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.590735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.590793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.590808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.591100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.591117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.591316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.591334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 
00:32:34.357 [2024-07-25 12:45:07.591571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.591590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.591772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.591789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.591965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.591983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.357 [2024-07-25 12:45:07.592286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.357 [2024-07-25 12:45:07.592305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.357 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.592509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.592527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.592805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.592825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.593037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.593054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.593349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.593367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.593692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.593711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.594036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.594053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 
00:32:34.358 [2024-07-25 12:45:07.594221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.594239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.594576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.594595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.594787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.594805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.595087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.595104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.595312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.595331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.595524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.595541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.595730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.595747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.595943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.595960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.596261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.596279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.596563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.596581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 
00:32:34.358 [2024-07-25 12:45:07.596874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.596891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.597119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.597137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.597319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.597337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.597639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.597657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.597989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.598006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.598337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.598355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.598678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.598696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.599025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.599042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.599372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.599389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.599567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.599585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 
00:32:34.358 [2024-07-25 12:45:07.599661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.599677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.599873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.599890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.600214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.600231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.600555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.600572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.600772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.600793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.601127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.601144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.601479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.601497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.601795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.601813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.602020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.602037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 00:32:34.358 [2024-07-25 12:45:07.602368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.358 [2024-07-25 12:45:07.602385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.358 qpair failed and we were unable to recover it. 
00:32:34.359 [2024-07-25 12:45:07.602717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.602734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.603063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.603080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.603342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.603360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.603689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.603708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.604025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.604043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.604213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.604230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.604564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.604582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.604906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.604923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.605255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.605272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.605462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.605479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 
00:32:34.359 [2024-07-25 12:45:07.605671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.605689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.606040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.606057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.606392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.606409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.606595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.606618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.606921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.606938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.607306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.607324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.607652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.607670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.607998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.608019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.608369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.608387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 00:32:34.359 [2024-07-25 12:45:07.608454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.359 [2024-07-25 12:45:07.608471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.359 qpair failed and we were unable to recover it. 
00:32:34.359 [2024-07-25 12:45:07.608758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.359 [2024-07-25 12:45:07.608776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.359 qpair failed and we were unable to recover it.
00:32:34.359 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for the entries timestamped 12:45:07.609132 through 12:45:07.670377 ...]
00:32:34.364 [2024-07-25 12:45:07.670712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.364 [2024-07-25 12:45:07.670729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.364 qpair failed and we were unable to recover it.
00:32:34.365 [2024-07-25 12:45:07.670964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.670982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.671320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.671338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.671701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.671718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.672056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.672073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.672411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.672431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.672608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.672625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.672840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.672858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.673190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.673208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.673532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.673556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.673894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.673911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 
00:32:34.365 [2024-07-25 12:45:07.674244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.674262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.674435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.674452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.674760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.674778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.674960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.674977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.675233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.675250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.675432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.675449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.675637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.675655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.676018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.365 [2024-07-25 12:45:07.676034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.365 qpair failed and we were unable to recover it. 00:32:34.365 [2024-07-25 12:45:07.676336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.676353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.676566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.676584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 
00:32:34.366 [2024-07-25 12:45:07.676877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.676894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.677256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.677274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.677607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.677624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.677809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.677827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.678124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.678142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.678317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.678335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.678642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.678661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.679020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.679038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.679370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.679388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.679597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.679615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 
00:32:34.366 [2024-07-25 12:45:07.679956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.679974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.680301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.680319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.680498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.680517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.680811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.680830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.681001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.681018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.681196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.681214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.681560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.681579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.681817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.681835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.682171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.682189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.682521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.682539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 
00:32:34.366 [2024-07-25 12:45:07.682866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.682883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.683209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.683226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.683523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.683539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.683765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.683783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.684075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.684097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.684404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.684421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.684483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.684498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.684669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.684688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.685016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.685033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.685381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.685399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 
00:32:34.366 [2024-07-25 12:45:07.685654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.685671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.685986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.686004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.686316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.686334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.686582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.686599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.366 qpair failed and we were unable to recover it. 00:32:34.366 [2024-07-25 12:45:07.686786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.366 [2024-07-25 12:45:07.686805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.687090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.687109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.687438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.687457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.687798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.687816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.688136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.688154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.688476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.688495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 
00:32:34.367 [2024-07-25 12:45:07.688830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.688848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.689039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.689058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.689230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.689247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.689588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.689606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.689929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.689947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.690276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.690294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.690509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.690527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.690883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.690901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.691231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.691249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.691436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.691454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 
00:32:34.367 [2024-07-25 12:45:07.691699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.691718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.692058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.692076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.692282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.692301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.692610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.692629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.692965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.692982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.693356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.693375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.693557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.693575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.693872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.693890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.694209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.694228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.694562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.694580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 
00:32:34.367 [2024-07-25 12:45:07.694917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.694935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.695256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.695273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.695597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.695615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.695950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.695967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.696141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.367 [2024-07-25 12:45:07.696163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.367 qpair failed and we were unable to recover it. 00:32:34.367 [2024-07-25 12:45:07.696500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.696517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.696868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.696886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.697054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.697072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.697280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.697299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.697617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.697636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 
00:32:34.368 [2024-07-25 12:45:07.697974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.697993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.698363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.698383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.698724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.698742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.699065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.699082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.699405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.699422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.699722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.699741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.700062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.700079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.700405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.700423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.700760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.700778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.700981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.700999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 
00:32:34.368 [2024-07-25 12:45:07.701169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.701187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.701494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.701512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.701730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.701748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.701934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.701951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.702024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.702042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.702230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.702249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.702575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.702593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.702917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.702935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.703259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.703276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.703604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.703622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 
00:32:34.368 [2024-07-25 12:45:07.703821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.703838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.704219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.704236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.704413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.704431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.704631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.704649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.704919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.704937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.705223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.705240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.705474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.705492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.705738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.705757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.706060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.706078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.706400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.706418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 
00:32:34.368 [2024-07-25 12:45:07.706480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.368 [2024-07-25 12:45:07.706495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.368 qpair failed and we were unable to recover it. 00:32:34.368 [2024-07-25 12:45:07.706676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.706694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.707036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.707053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.707259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.707280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.707474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.707494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.707784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.707803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.708011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.708028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.708377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.708396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.708723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.708740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.709058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.709076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 
00:32:34.369 [2024-07-25 12:45:07.709414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.709431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.709839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.709869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.710220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.710238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.710451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.710469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.710771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.710789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.711108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.711127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.711304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.711321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.711623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.711641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.711993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.712011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.712193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.712211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 
00:32:34.369 [2024-07-25 12:45:07.712418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.712435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.712737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.712754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.712966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.712984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.713345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.713362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.713707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.713725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.714125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.714143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.714439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.714458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.714771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.714790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.714998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.715016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.715244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.715262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 
00:32:34.369 [2024-07-25 12:45:07.715472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.715490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.715814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.715833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.716171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.716190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.716519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.716536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.716800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.716819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.717149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.717166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.717375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.717393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.369 qpair failed and we were unable to recover it. 00:32:34.369 [2024-07-25 12:45:07.717582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.369 [2024-07-25 12:45:07.717600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.717918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.717935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.718267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.718285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 
00:32:34.370 [2024-07-25 12:45:07.718608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.718627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.718805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.718823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.719121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.719139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.719479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.719496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.719561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.719580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.719647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.719662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.719979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.719996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.720205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.720222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.720555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.720573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.720644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.720659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 
00:32:34.370 [2024-07-25 12:45:07.720997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.721015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.721349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.721367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.721673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.721691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.721917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.721933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.722150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.722168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.722350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.722368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.722716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.722734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.723061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.723079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.723404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.723420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.723770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.723789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 
00:32:34.370 [2024-07-25 12:45:07.723979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.723998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.724305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.724322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.724648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.724666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.724999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.725017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.725215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.725232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.725572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.725590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.725928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.725946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.726268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.726286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.726614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.726632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.726830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.726847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 
00:32:34.370 [2024-07-25 12:45:07.727042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.727060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.727241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.727259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.727561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.727579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.370 [2024-07-25 12:45:07.727891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.370 [2024-07-25 12:45:07.727908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.370 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.728239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.728257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.728583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.728600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.728944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.728961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.729283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.729301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.729612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.729631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.729956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.729974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 
00:32:34.371 [2024-07-25 12:45:07.730309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.730327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.730513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.730530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.730853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.730871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.731082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.731100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.731419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.731437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.731768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.731786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.731992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.732009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.732302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.732320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.732530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.732567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.732775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.732792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 
00:32:34.371 [2024-07-25 12:45:07.733082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.733099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.733424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.733442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.733662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.733680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.733971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.733987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.734200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.734218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.734425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.734443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.734795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.734813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.735147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.735166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.735495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.735513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.735795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.735812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 
00:32:34.371 [2024-07-25 12:45:07.736055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.736073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.736417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.736435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.736610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.736628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.736860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.736878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.737198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.737215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.737557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.737575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.737897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.737914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.738202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.738219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.738523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.738540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 00:32:34.371 [2024-07-25 12:45:07.738899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.371 [2024-07-25 12:45:07.738918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.371 qpair failed and we were unable to recover it. 
00:32:34.371 [2024-07-25 12:45:07.738986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.739003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.739283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.739305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.739489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.739506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.739835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.739856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.740146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.740163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.740493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.740511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.740850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.740869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.741077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.741094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.741302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.741320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.741619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.741638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 
00:32:34.372 [2024-07-25 12:45:07.742000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.742018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.742336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.742354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.742678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.742696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.742884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.742902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.743243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.743261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.743489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.743507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.743840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.743858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.743950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.743967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.744243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.744261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.744473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.744490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 
00:32:34.372 [2024-07-25 12:45:07.744715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.744732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.744983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.745002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.745292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.745310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.745493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.745512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.745696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.745714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.745952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.745970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.746294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.746311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.746663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.746681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.747014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.747033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 00:32:34.372 [2024-07-25 12:45:07.747361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.372 [2024-07-25 12:45:07.747379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.372 qpair failed and we were unable to recover it. 
00:32:34.641 [2024-07-25 12:45:07.747718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.747739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.747947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.747968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.748151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.748169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.748237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.748253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.748529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.748554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.748877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.748894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.749068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.749086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.749438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.749455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.749769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.749787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.749956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.749975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 
00:32:34.641 [2024-07-25 12:45:07.750263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.750281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.750608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.750631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.750872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.750890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.751212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.751231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.751572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.751590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.751924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.751942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.752063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.752081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.752403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.752421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.752755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.641 [2024-07-25 12:45:07.752773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.641 qpair failed and we were unable to recover it. 00:32:34.641 [2024-07-25 12:45:07.753095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.753113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 
00:32:34.642 [2024-07-25 12:45:07.753301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.753319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.753612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.753630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.753825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.753844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.754087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.754106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.754279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.754297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.754605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.754623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.754830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.754849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.755096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.755113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.755459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.755476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.755657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.755676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 
00:32:34.642 [2024-07-25 12:45:07.755904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.755922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.756225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.756243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.756568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.756586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.756790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.756807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.757136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.757154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.757479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.757497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.757839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.757857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.758109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.758127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.758480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.758499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 00:32:34.642 [2024-07-25 12:45:07.758710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.642 [2024-07-25 12:45:07.758730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.642 qpair failed and we were unable to recover it. 
00:32:34.642 [2024-07-25 12:45:07.759030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.642 [2024-07-25 12:45:07.759048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.642 qpair failed and we were unable to recover it.
00:32:34.642-00:32:34.646 [... the same three-line sequence repeats continuously from 12:45:07.759 through 12:45:07.820: posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420; and each qpair fails without recovery. Only the timestamps differ between repetitions. ...]
00:32:34.646 [2024-07-25 12:45:07.820699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.820717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.820979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.821003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.821338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.821355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.821698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.821716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.821909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.821927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.822112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.822131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.822435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.822454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.822691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.822710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.823028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.823046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.823363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.823380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 
00:32:34.646 [2024-07-25 12:45:07.823707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.823726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.823936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.823954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.824147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.824166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.824479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.824497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.824734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.824753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.824939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.824958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.825313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.825333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.825664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.825683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.825975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.825994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.826267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.826287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 
00:32:34.646 [2024-07-25 12:45:07.826480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.826503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.826832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.826853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.827170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.827187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.827492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.827510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.827878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.827896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.828084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.828102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.828443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.828461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.828864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.828883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.829094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.829113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.829469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.829488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 
00:32:34.646 [2024-07-25 12:45:07.829792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.829811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.830136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.830154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.830220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.830236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.830521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.830540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.830867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.830885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.831238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.831256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.831437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.831455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.831839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.831858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.832050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.832069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.832313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.832332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 
00:32:34.646 [2024-07-25 12:45:07.832541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.832581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.832792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.832812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.833132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.833150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.833482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.833500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.833718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.833737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.833916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.833934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.834271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.834290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.834597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.834616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.834916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.834936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.835154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.835172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 
00:32:34.646 [2024-07-25 12:45:07.835365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.835384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.835728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.835746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.646 [2024-07-25 12:45:07.835955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.646 [2024-07-25 12:45:07.835972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.646 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.836157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.836174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.836506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.836524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.836749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.836770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.836958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.836977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.837288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.837306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.837634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.837653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.838004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.838023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 
00:32:34.647 [2024-07-25 12:45:07.838220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.838240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.838573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.838592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.838922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.838940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.839272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.839290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.839462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.839480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.839659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.839677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.840003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.840022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.840352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.840370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.840694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.840713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.841049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.841067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 
00:32:34.647 [2024-07-25 12:45:07.841395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.841415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.841612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.841631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.841952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.841970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.842297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.842315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.842686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.842705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.843037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.843056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.843384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.843401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.843719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.843739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.844062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.844081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.844406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.844424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 
00:32:34.647 [2024-07-25 12:45:07.844720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.844739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.845058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.845081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.845161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.845178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.845342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.845359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.845712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.845732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.845939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.845958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.846302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.846322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.846648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.846666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.846966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.846985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.847304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.847322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 
00:32:34.647 [2024-07-25 12:45:07.847530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.847560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.847745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.847764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.847946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.847964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.848300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.848318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.848511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.848530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.848858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.848877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.849200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.849218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.849544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.849574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.849909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.849928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.850142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.850161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 
00:32:34.647 [2024-07-25 12:45:07.850380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.850400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.850680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.850700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.850765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.850783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.851034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.851052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.851387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.851406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.851713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.851732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.852058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.647 [2024-07-25 12:45:07.852076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.647 qpair failed and we were unable to recover it. 00:32:34.647 [2024-07-25 12:45:07.852398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.852417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.852718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.852736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.853064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.853083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 
00:32:34.648 [2024-07-25 12:45:07.853414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.853432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.853762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.853780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.853960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.853978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.854238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.854256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.854586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.854604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.854912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.854930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.855258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.855276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.855618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.855636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.855951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.855970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.856294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.856313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 
00:32:34.648 [2024-07-25 12:45:07.856526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.856544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.856875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.856896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.856961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.856980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.857263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.857283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.857471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.857489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.857672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.857692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.858008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.858027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.858361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.858380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.858585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.858604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.858702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.858719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 
00:32:34.648 [2024-07-25 12:45:07.859095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.859113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.859440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.859459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.859632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.859651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.859940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.859959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.860172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.860190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.860480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.860498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.860807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.860826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.861003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.861022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.861211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.861229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.861334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.861352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 
00:32:34.648 [2024-07-25 12:45:07.861628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.861647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.862009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.862027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.862388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.862406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.862717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.862736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.863055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.863074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.863395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.863414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.863629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.863647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.863991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.864011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.864329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.864347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.864673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.864692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 
00:32:34.648 [2024-07-25 12:45:07.864763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.864781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.865058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.865076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.865423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.865442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.865626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.865644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.865947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.865965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.866296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.866316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.866492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.866510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.866682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.866701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.867037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.867054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.867380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.867397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 
00:32:34.648 [2024-07-25 12:45:07.867579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.867598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.867784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.867806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.868109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.868126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.868417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.868435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.868756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.868774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.869108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.869126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.869298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.869316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.869530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.869564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.869915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.648 [2024-07-25 12:45:07.869932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.648 qpair failed and we were unable to recover it. 00:32:34.648 [2024-07-25 12:45:07.870002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.870019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 
00:32:34.649 [2024-07-25 12:45:07.870261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.870279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.870603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.870621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.870826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.870844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.871180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.871198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.871527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.871545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.871902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.871920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.872125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.872142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.872473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.872491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.872677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.872696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.872938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.872956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 
00:32:34.649 [2024-07-25 12:45:07.873141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.873159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.873485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.873504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.873676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.873694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.873878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.873895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.874228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.874245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.874590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.874608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.874745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.874763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.875100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.875118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.875450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.875468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.875677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.875695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 
00:32:34.649 [2024-07-25 12:45:07.875880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.875897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.876104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.876122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.876346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.876365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.876596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.876614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.876820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.876838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.877174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.877191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.877521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.877540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.877913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.877931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.878112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.878130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.878450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.878468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 
00:32:34.649 [2024-07-25 12:45:07.878797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.878815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.879145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.879166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.879466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.879484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.879712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.879730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.880051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.880071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.880288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.880306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.880632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.880650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.881007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.881025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.881359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.881376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.881702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.881721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 
00:32:34.649 [2024-07-25 12:45:07.882046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.882063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.882307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.882325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.882659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.882678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.882879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.882897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.883226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.883243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.883329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.883346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.883520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.883538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.883847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.883865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.884074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.884091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.884426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.884445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 
00:32:34.649 [2024-07-25 12:45:07.884783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.884801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.885050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.885068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.885394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.885412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.885680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.885698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.886031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.886048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.886386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.886402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.886736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.886755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.886938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.886956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.887246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.887268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.887585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.887603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 
00:32:34.649 [2024-07-25 12:45:07.887957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.887974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.888305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.649 [2024-07-25 12:45:07.888322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.649 qpair failed and we were unable to recover it. 00:32:34.649 [2024-07-25 12:45:07.888660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.888677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.889044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.889062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.889392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.889409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.889723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.889741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.889945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.889962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.890302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.890320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.890631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.890649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.890968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.890986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 
00:32:34.650 [2024-07-25 12:45:07.891315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.891333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.891663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.891682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.891903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.891920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.892209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.892227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.892438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.892456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.892743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.892761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.893049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.893066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.893236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.893253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.893556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.893575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.893806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.893823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 
00:32:34.650 [2024-07-25 12:45:07.894142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.894159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.894348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.894366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.894719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.894737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.895075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.895093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.895418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.895436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.895640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.650 [2024-07-25 12:45:07.895659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.650 qpair failed and we were unable to recover it. 00:32:34.650 [2024-07-25 12:45:07.896001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.896019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.896199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.896217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.896513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.896530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.896737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.896755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 
00:32:34.651 [2024-07-25 12:45:07.896944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.896962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.897183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.897201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.897406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.897424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.897650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.897667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.897955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.897973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.898157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.898175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.898536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.898563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.898679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.898696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.899002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.899023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.899488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.899505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 
00:32:34.651 [2024-07-25 12:45:07.899816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.899834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.900048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.900065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.900245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.900264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.900479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.900497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.900695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.900712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.901184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.901201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.901483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.901501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.901843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.901863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.902195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.902213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.902557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.902576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 
00:32:34.651 [2024-07-25 12:45:07.902751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.902768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.903116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.903133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.903461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.903479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 [2024-07-25 12:45:07.903749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.903768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.651 [2024-07-25 12:45:07.904098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.651 [2024-07-25 12:45:07.904119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.651 qpair failed and we were unable to recover it. 00:32:34.651 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:32:34.651 [2024-07-25 12:45:07.904293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.904311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:34.652 [2024-07-25 12:45:07.904560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.904579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:34.652 [2024-07-25 12:45:07.904780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.904799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 
00:32:34.652 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.652 [2024-07-25 12:45:07.905155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.905173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.905358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.905375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.905689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.905707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.905885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.905902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.906151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.906170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.906500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.906519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.906862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.906881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.907203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.907221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 00:32:34.652 [2024-07-25 12:45:07.907558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.652 [2024-07-25 12:45:07.907577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.652 qpair failed and we were unable to recover it. 
00:32:34.652 [2024-07-25 12:45:07.907677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.652 [2024-07-25 12:45:07.907694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.652 qpair failed and we were unable to recover it.
00:32:34.652 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 12:45:07.907 through 12:45:07.946 ...]
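On Linux, errno 111 is ECONNREFUSED: nothing at 10.0.0.2:4420 is accepting TCP connections while the initiator keeps retrying, so every connect() attempt fails immediately with the same error. A minimal bash sketch of that failure mode against a hypothetical local port with no listener (the address and port are illustrative only, not the ones used by this run):

    # opening a TCP connection to a port nobody listens on is refused with errno 111 (ECONNREFUSED);
    # /dev/tcp is a bash built-in path, and the failing redirection makes the subshell exit non-zero
    if ! (exec 3<>/dev/tcp/127.0.0.1/14420) 2>/dev/null; then
        echo "connect() failed: ECONNREFUSED (errno 111)"
    fi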
00:32:34.654 [2024-07-25 12:45:07.946721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.654 [2024-07-25 12:45:07.946738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.654 qpair failed and we were unable to recover it.
00:32:34.654 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:34.654 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:34.654 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:34.654 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:34.654 [... the connect() failed, errno = 111 / sock connection error / qpair failed messages continue to repeat around these trace lines, 12:45:07.947 through 12:45:07.949 ...]
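While the connection keeps being refused, the test script itself moves on: it registers a cleanup trap and creates the malloc bdev that will back the target. The same bdev can be created outside the harness with SPDK's scripts/rpc.py; rpc_cmd in the trace above appears to be the harness wrapper for this JSON-RPC method. A sketch assuming an SPDK target is already running and listening on the default RPC socket (the 64/512 sizes and the Malloc0 name come from the trace line; everything else is illustrative):

    # create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # list bdevs to confirm Malloc0 now exists
    ./scripts/rpc.py bdev_get_bdevs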
00:32:34.654 [2024-07-25 12:45:07.949351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.654 [2024-07-25 12:45:07.949370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420
00:32:34.654 qpair failed and we were unable to recover it.
00:32:34.654 [... the same three-line failure sequence repeats for each subsequent reconnect attempt from 12:45:07.949 through 12:45:07.969 ...]
00:32:34.655 [2024-07-25 12:45:07.969962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.969981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.970317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.970335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.970417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.970434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.970641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.970659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.970961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.970979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.971315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.971333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.971624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.971642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.971854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.971872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.972069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.972087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.972156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.972172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 
00:32:34.655 [2024-07-25 12:45:07.972481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.972499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.972781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.972800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.973132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.973151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.973480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.973497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.973704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.973723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.974057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.974076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.974299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.974318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.974519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.974538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.974842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.974861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.975190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.975207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 
00:32:34.655 [2024-07-25 12:45:07.975384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.975402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.975622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.975640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.975871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.975888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.976205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.976222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.976449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.976467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.976806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.976823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.977223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.977240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.977544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.977571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 Malloc0 00:32:34.655 [2024-07-25 12:45:07.977814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.977833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.978189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.978206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 
00:32:34.655 [2024-07-25 12:45:07.978411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.978428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.655 [2024-07-25 12:45:07.978499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.978514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.978844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.978860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:34.655 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.655 [2024-07-25 12:45:07.979167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.979185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.979373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.979390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.655 [2024-07-25 12:45:07.979645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.979668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.980007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.655 [2024-07-25 12:45:07.980024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.655 qpair failed and we were unable to recover it. 00:32:34.655 [2024-07-25 12:45:07.980427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.980450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it.
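The interleaved shell trace (host/target_disconnect.sh line 21) shows the test setting up the target side while the initiator keeps retrying: rpc_cmd is the autotest wrapper that forwards its arguments to SPDK's scripts/rpc.py against the running target. A rough standalone sketch of that step follows; the RPC socket path is rpc.py's default, and the reading of -o as the TCP C2H-success toggle is an assumption:

  # create the NVMe-oF TCP transport on the target (socket path assumed to be the default)
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o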
00:32:34.656 [2024-07-25 12:45:07.980664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.980683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.981060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.981078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.981436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.981456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.981669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.981686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.981936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.981952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.982202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.982218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.982421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.982438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.982802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.982820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.983000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.983016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.983215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.983231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 
00:32:34.656 [2024-07-25 12:45:07.983446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.983465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.984045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.984069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.984405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.984422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.984869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.984890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.985093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.985109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.985230] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.656 [2024-07-25 12:45:07.985454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.985473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.985675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.985693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.985875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.985890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.986156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.986173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 
00:32:34.656 [2024-07-25 12:45:07.986533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.986558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.986919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.986942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.987138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.987157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.987393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.987409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.987723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.987742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.987829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.987846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.988140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.988159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.988481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.988499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.988697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.988716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.988981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.989000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 
00:32:34.656 [2024-07-25 12:45:07.989211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.989228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.989573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.989591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.989680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.989693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.989981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.989998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.990323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.990341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.990537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.990567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.990912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.990930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.991245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.991264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.991470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.991489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.991795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.991813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 
00:32:34.656 [2024-07-25 12:45:07.992149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.992171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.992367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.992384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.992732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.992749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.993080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.993097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.993416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.993433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.993630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.993646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.993854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.993871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.994098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.994116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.994308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.994326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.994529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.994555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 
00:32:34.656 [2024-07-25 12:45:07.994852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.994870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.656 [2024-07-25 12:45:07.995190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.995208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.995292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.995309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:34.656 [2024-07-25 12:45:07.995591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.995608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.656 [2024-07-25 12:45:07.995959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.995978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 12:45:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.656 [2024-07-25 12:45:07.996305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.996322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.996659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.656 [2024-07-25 12:45:07.996676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.656 qpair failed and we were unable to recover it. 00:32:34.656 [2024-07-25 12:45:07.996861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.996879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 
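Here the traced step (host/target_disconnect.sh line 22) creates the NVMe-oF subsystem the host will later connect to. A hedged sketch of the same step outside the harness, with the flag meanings taken from rpc.py's usual options (-a allow any host NQN, -s serial number):

  # define subsystem nqn.2016-06.io.spdk:cnode1 with a fixed serial, open to any host
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001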
00:32:34.657 [2024-07-25 12:45:07.997220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.997238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.997572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.997589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.997684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.997698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.997996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.998013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.998208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.998224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.998435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.998452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.998781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.998798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.998991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.999010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.999346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.999364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:07.999693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.999711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 
00:32:34.657 [2024-07-25 12:45:07.999894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:07.999909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.000109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.000127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.000466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.000484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.000924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.000945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.001312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.001330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.001677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.001696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.001896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.001914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.002247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.002266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.002597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.002615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.002807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.002823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 
00:32:34.657 [2024-07-25 12:45:08.003048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.003070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.003391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.003410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.003757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.003774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.004140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.004159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.004496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.004514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.004883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.004902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.005202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.005218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.005544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.005574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.005945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.005964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.006052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.006065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 
00:32:34.657 [2024-07-25 12:45:08.006307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.006324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.006574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.006590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.006935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.006952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.657 [2024-07-25 12:45:08.007164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.007188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.007392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.007409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:34.657 [2024-07-25 12:45:08.007714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.007732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.657 [2024-07-25 12:45:08.007920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.007938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.657 [2024-07-25 12:45:08.008131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.008150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 
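The next traced step (host/target_disconnect.sh line 24) attaches the Malloc0 bdev to that subsystem as a namespace; the connect() retries above presumably keep failing with ECONNREFUSED until a listener is actually opened on 10.0.0.2:4420 by a later step not visible in this excerpt. A minimal sketch of the namespace step, assuming Malloc0 was created earlier (for example with bdev_malloc_create):

  # expose the Malloc0 bdev as a namespace of cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0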
00:32:34.657 [2024-07-25 12:45:08.008488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.008505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.008838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.008857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.009055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.009070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.009412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.009429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.009650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.009666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.010012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.010028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.010287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.010306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.010518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.010539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.010757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.010777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.011087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.011106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 
00:32:34.657 [2024-07-25 12:45:08.011431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.011448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.011774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.011792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.011992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.012008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.012287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.012306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.012622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.012640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.012850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.012865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.013084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.013102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.013403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.013420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.013627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.013644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.013890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.013907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 
00:32:34.657 [2024-07-25 12:45:08.014252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.014269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.014610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.014629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.014824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.014842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.015015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.015031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.015239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.015256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.015582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.015600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.015987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.657 [2024-07-25 12:45:08.016005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.657 qpair failed and we were unable to recover it. 00:32:34.657 [2024-07-25 12:45:08.016331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.016349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.016681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.016698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.017030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.017047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 
00:32:34.658 [2024-07-25 12:45:08.017405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.017425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.017722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.017740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.018072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.018091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.018453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.018471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.018710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.018727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.019056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.019073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.658 [2024-07-25 12:45:08.019402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.019421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.658 [2024-07-25 12:45:08.019607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.019624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 
00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.658 [2024-07-25 12:45:08.019920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.019938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.658 [2024-07-25 12:45:08.020157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.020173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.020400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.020415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.020639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.020658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.020974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.020993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.021330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.021347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.021683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.021701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.021921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.021943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.022157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.022173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 
00:32:34.658 [2024-07-25 12:45:08.022354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.022369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.022699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.022716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.023038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.023054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.023280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.023298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.023519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.023535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.023944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.023962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.024276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.024292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.024622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.024639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.024977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.024994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.025216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.025233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 
00:32:34.658 [2024-07-25 12:45:08.025561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.025579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.025797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.025813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.026149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.658 [2024-07-25 12:45:08.026166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdf54000b90 with addr=10.0.0.2, port=4420 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.658 [2024-07-25 12:45:08.026730] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:34.658 [2024-07-25 12:45:08.036319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.658 [2024-07-25 12:45:08.036444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.658 [2024-07-25 12:45:08.036480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.658 [2024-07-25 12:45:08.036492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.658 [2024-07-25 12:45:08.036503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.658 [2024-07-25 12:45:08.036535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.658 qpair failed and we were unable to recover it. 
00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.658 12:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 619539 00:32:34.658 [2024-07-25 12:45:08.046276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.658 [2024-07-25 12:45:08.046420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.658 [2024-07-25 12:45:08.046450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.658 [2024-07-25 12:45:08.046463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.658 [2024-07-25 12:45:08.046472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.658 [2024-07-25 12:45:08.046498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.658 qpair failed and we were unable to recover it. 00:32:34.920 [2024-07-25 12:45:08.056298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.920 [2024-07-25 12:45:08.056389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.920 [2024-07-25 12:45:08.056420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.920 [2024-07-25 12:45:08.056431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.920 [2024-07-25 12:45:08.056441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.920 [2024-07-25 12:45:08.056466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.920 qpair failed and we were unable to recover it. 00:32:34.920 [2024-07-25 12:45:08.066569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.920 [2024-07-25 12:45:08.066706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.920 [2024-07-25 12:45:08.066735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.920 [2024-07-25 12:45:08.066746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.920 [2024-07-25 12:45:08.066757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.920 [2024-07-25 12:45:08.066782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.920 qpair failed and we were unable to recover it. 
00:32:34.920 [2024-07-25 12:45:08.076266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.920 [2024-07-25 12:45:08.076376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.920 [2024-07-25 12:45:08.076406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.920 [2024-07-25 12:45:08.076419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.920 [2024-07-25 12:45:08.076429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.920 [2024-07-25 12:45:08.076453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.920 qpair failed and we were unable to recover it. 00:32:34.920 [2024-07-25 12:45:08.086298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.920 [2024-07-25 12:45:08.086384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.920 [2024-07-25 12:45:08.086412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.920 [2024-07-25 12:45:08.086425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.920 [2024-07-25 12:45:08.086435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.920 [2024-07-25 12:45:08.086458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.920 qpair failed and we were unable to recover it. 00:32:34.920 [2024-07-25 12:45:08.096316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.920 [2024-07-25 12:45:08.096418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.920 [2024-07-25 12:45:08.096453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.920 [2024-07-25 12:45:08.096464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.920 [2024-07-25 12:45:08.096474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.920 [2024-07-25 12:45:08.096499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.920 qpair failed and we were unable to recover it. 
00:32:34.920 [2024-07-25 12:45:08.106498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.920 [2024-07-25 12:45:08.106657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.920 [2024-07-25 12:45:08.106687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.920 [2024-07-25 12:45:08.106702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.920 [2024-07-25 12:45:08.106712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.920 [2024-07-25 12:45:08.106736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.920 qpair failed and we were unable to recover it. 00:32:34.920 [2024-07-25 12:45:08.116348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.920 [2024-07-25 12:45:08.116453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.116480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.116492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.116501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.116524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.126449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.126581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.126610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.126621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.126630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.126654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 
00:32:34.921 [2024-07-25 12:45:08.136430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.136511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.136538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.136574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.136586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.136609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.146685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.146806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.146834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.146845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.146854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.146877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.156412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.156568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.156597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.156608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.156618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.156641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 
00:32:34.921 [2024-07-25 12:45:08.166596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.166687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.166716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.166726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.166735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.166761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.176586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.176669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.176696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.176707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.176716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.176739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.186841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.186975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.187004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.187015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.187025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.187048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 
00:32:34.921 [2024-07-25 12:45:08.196661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.196767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.196800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.196813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.196824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.196850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.206572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.206654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.206682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.206693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.206703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.206726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.216700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.216784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.216811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.216821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.216832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.216855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 
00:32:34.921 [2024-07-25 12:45:08.227025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.227157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.227185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.227196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.227206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.227229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.236708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.921 [2024-07-25 12:45:08.236810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.921 [2024-07-25 12:45:08.236839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.921 [2024-07-25 12:45:08.236850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.921 [2024-07-25 12:45:08.236861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.921 [2024-07-25 12:45:08.236902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.921 qpair failed and we were unable to recover it. 00:32:34.921 [2024-07-25 12:45:08.246878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.246975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.247005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.247015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.247025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.247048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 
00:32:34.922 [2024-07-25 12:45:08.256857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.256949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.256983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.256995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.257004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.257029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 00:32:34.922 [2024-07-25 12:45:08.267134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.267251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.267281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.267292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.267302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.267324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 00:32:34.922 [2024-07-25 12:45:08.276971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.277072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.277099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.277110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.277121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.277144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 
00:32:34.922 [2024-07-25 12:45:08.286925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.287007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.287041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.287053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.287062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.287085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 00:32:34.922 [2024-07-25 12:45:08.297013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.297112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.297142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.297153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.297162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.297185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 00:32:34.922 [2024-07-25 12:45:08.307280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.307439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.307467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.307479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.307488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.307511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 
00:32:34.922 [2024-07-25 12:45:08.317154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.317291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.317320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.317331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.317341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.317364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 00:32:34.922 [2024-07-25 12:45:08.328036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.328159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.328188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.328200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.328215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.328238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 00:32:34.922 [2024-07-25 12:45:08.337118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:34.922 [2024-07-25 12:45:08.337232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:34.922 [2024-07-25 12:45:08.337261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:34.922 [2024-07-25 12:45:08.337272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:34.922 [2024-07-25 12:45:08.337282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:34.922 [2024-07-25 12:45:08.337305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:34.922 qpair failed and we were unable to recover it. 
00:32:35.186 [2024-07-25 12:45:08.347448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.347586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.347615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.347627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.347637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.347660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 00:32:35.186 [2024-07-25 12:45:08.357258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.357363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.357390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.357403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.357414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.357437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 00:32:35.186 [2024-07-25 12:45:08.367164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.367252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.367283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.367295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.367305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.367330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 
00:32:35.186 [2024-07-25 12:45:08.377226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.377327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.377355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.377368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.377378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.377401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 00:32:35.186 [2024-07-25 12:45:08.387569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.387692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.387720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.387732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.387744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.387768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 00:32:35.186 [2024-07-25 12:45:08.397322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.397430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.397459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.397469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.397479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.397501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 
00:32:35.186 [2024-07-25 12:45:08.407347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.407474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.407502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.407514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.407523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.407554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 00:32:35.186 [2024-07-25 12:45:08.417275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.417368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.417395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.417415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.417425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.417447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 00:32:35.186 [2024-07-25 12:45:08.427724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.427847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.427877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.427889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.186 [2024-07-25 12:45:08.427898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.186 [2024-07-25 12:45:08.427921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.186 qpair failed and we were unable to recover it. 
00:32:35.186 [2024-07-25 12:45:08.437434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.186 [2024-07-25 12:45:08.437533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.186 [2024-07-25 12:45:08.437569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.186 [2024-07-25 12:45:08.437580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.437591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.437616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.447523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.447633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.447660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.447672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.447681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.447703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.457565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.457669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.457695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.457706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.457715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.457738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 
00:32:35.187 [2024-07-25 12:45:08.467857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.467976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.468004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.468015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.468024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.468048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.477618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.477728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.477759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.477770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.477780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.477802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.487617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.487708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.487735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.487747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.487757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.487780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 
00:32:35.187 [2024-07-25 12:45:08.497678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.497767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.497794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.497806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.497816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.497839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.508021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.508144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.508173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.508190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.508200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.508223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.517748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.517847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.517875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.517886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.517897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.517921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 
00:32:35.187 [2024-07-25 12:45:08.527758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.527841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.527873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.527886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.527895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.527920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.537799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.537883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.537911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.537922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.537932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.537955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.548121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.548246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.548274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.548286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.548295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.548318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 
00:32:35.187 [2024-07-25 12:45:08.557783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.557881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.187 [2024-07-25 12:45:08.557908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.187 [2024-07-25 12:45:08.557919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.187 [2024-07-25 12:45:08.557930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.187 [2024-07-25 12:45:08.557953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.187 qpair failed and we were unable to recover it. 00:32:35.187 [2024-07-25 12:45:08.567884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.187 [2024-07-25 12:45:08.567976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.188 [2024-07-25 12:45:08.568003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.188 [2024-07-25 12:45:08.568015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.188 [2024-07-25 12:45:08.568026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.188 [2024-07-25 12:45:08.568049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-25 12:45:08.577925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.188 [2024-07-25 12:45:08.578015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.188 [2024-07-25 12:45:08.578042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.188 [2024-07-25 12:45:08.578055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.188 [2024-07-25 12:45:08.578065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.188 [2024-07-25 12:45:08.578089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.188 qpair failed and we were unable to recover it. 
00:32:35.188 [2024-07-25 12:45:08.588219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.188 [2024-07-25 12:45:08.588338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.188 [2024-07-25 12:45:08.588366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.188 [2024-07-25 12:45:08.588378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.188 [2024-07-25 12:45:08.588388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.188 [2024-07-25 12:45:08.588411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-25 12:45:08.598005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.188 [2024-07-25 12:45:08.598113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.188 [2024-07-25 12:45:08.598147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.188 [2024-07-25 12:45:08.598160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.188 [2024-07-25 12:45:08.598169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.188 [2024-07-25 12:45:08.598192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.450 [2024-07-25 12:45:08.607909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.450 [2024-07-25 12:45:08.608014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.450 [2024-07-25 12:45:08.608044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.450 [2024-07-25 12:45:08.608055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.450 [2024-07-25 12:45:08.608065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.450 [2024-07-25 12:45:08.608088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.450 qpair failed and we were unable to recover it. 
00:32:35.450 [2024-07-25 12:45:08.618068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.450 [2024-07-25 12:45:08.618158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.450 [2024-07-25 12:45:08.618185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.450 [2024-07-25 12:45:08.618197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.450 [2024-07-25 12:45:08.618206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.450 [2024-07-25 12:45:08.618230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.450 qpair failed and we were unable to recover it. 00:32:35.450 [2024-07-25 12:45:08.628376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.450 [2024-07-25 12:45:08.628490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.450 [2024-07-25 12:45:08.628518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.450 [2024-07-25 12:45:08.628529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.450 [2024-07-25 12:45:08.628538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.450 [2024-07-25 12:45:08.628569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.450 qpair failed and we were unable to recover it. 00:32:35.450 [2024-07-25 12:45:08.638036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.450 [2024-07-25 12:45:08.638137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.450 [2024-07-25 12:45:08.638166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.450 [2024-07-25 12:45:08.638177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.638187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.638219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 
00:32:35.451 [2024-07-25 12:45:08.648169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.648298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.648326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.648337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.648346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.648369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.658179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.658262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.658289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.658300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.658310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.658334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.668492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.668621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.668649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.668661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.668670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.668693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 
00:32:35.451 [2024-07-25 12:45:08.678249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.678371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.678399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.678410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.678420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.678443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.688173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.688294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.688329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.688341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.688350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.688389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.698293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.698386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.698414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.698426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.698435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.698458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 
00:32:35.451 [2024-07-25 12:45:08.708664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.708782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.708811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.708822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.708832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.708855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.718363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.718466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.718493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.718506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.718515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.718538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.728449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.728531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.728566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.728577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.728594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.728617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 
00:32:35.451 [2024-07-25 12:45:08.738469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.738560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.738589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.738601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.738611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.738633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.748765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.748883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.748911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.748922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.748931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.748954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 00:32:35.451 [2024-07-25 12:45:08.758597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.758703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.451 [2024-07-25 12:45:08.758731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.451 [2024-07-25 12:45:08.758741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.451 [2024-07-25 12:45:08.758750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.451 [2024-07-25 12:45:08.758775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.451 qpair failed and we were unable to recover it. 
00:32:35.451 [2024-07-25 12:45:08.768459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.451 [2024-07-25 12:45:08.768543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.768578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.768590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.768599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.768623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 00:32:35.452 [2024-07-25 12:45:08.778585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.778706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.778735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.778747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.778756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.778780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 00:32:35.452 [2024-07-25 12:45:08.788919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.789088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.789115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.789127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.789137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.789159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 
00:32:35.452 [2024-07-25 12:45:08.798662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.798769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.798795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.798808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.798818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.798841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 00:32:35.452 [2024-07-25 12:45:08.808690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.808774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.808800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.808814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.808824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.808847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 00:32:35.452 [2024-07-25 12:45:08.818696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.818775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.818802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.818813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.818830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.818853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 
00:32:35.452 [2024-07-25 12:45:08.829051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.829178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.829211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.829222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.829231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.829256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 00:32:35.452 [2024-07-25 12:45:08.838802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.838918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.838946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.838958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.838968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.838992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 00:32:35.452 [2024-07-25 12:45:08.848846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.848928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.848955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.848968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.848979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.849002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 
00:32:35.452 [2024-07-25 12:45:08.858892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.452 [2024-07-25 12:45:08.859018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.452 [2024-07-25 12:45:08.859046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.452 [2024-07-25 12:45:08.859058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.452 [2024-07-25 12:45:08.859067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.452 [2024-07-25 12:45:08.859090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.452 qpair failed and we were unable to recover it. 00:32:35.714 [2024-07-25 12:45:08.869170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.714 [2024-07-25 12:45:08.869297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.714 [2024-07-25 12:45:08.869325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.714 [2024-07-25 12:45:08.869336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.714 [2024-07-25 12:45:08.869345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.714 [2024-07-25 12:45:08.869368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.714 qpair failed and we were unable to recover it. 00:32:35.714 [2024-07-25 12:45:08.878968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.714 [2024-07-25 12:45:08.879100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.714 [2024-07-25 12:45:08.879129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.714 [2024-07-25 12:45:08.879140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.714 [2024-07-25 12:45:08.879149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.714 [2024-07-25 12:45:08.879172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.714 qpair failed and we were unable to recover it. 
00:32:35.714 [2024-07-25 12:45:08.888873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.714 [2024-07-25 12:45:08.888988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.714 [2024-07-25 12:45:08.889016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.714 [2024-07-25 12:45:08.889027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.714 [2024-07-25 12:45:08.889036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.714 [2024-07-25 12:45:08.889058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.714 qpair failed and we were unable to recover it. 00:32:35.714 [2024-07-25 12:45:08.899000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.899080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.899107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.899117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.899128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.899151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:08.909340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.909497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.909525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.909559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.909569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.909592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 
00:32:35.715 [2024-07-25 12:45:08.919085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.919189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.919216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.919227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.919236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.919261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:08.929094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.929174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.929201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.929212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.929223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.929245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:08.939133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.939221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.939249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.939261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.939271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.939295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 
00:32:35.715 [2024-07-25 12:45:08.949437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.949558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.949587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.949599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.949608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.949632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:08.959115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.959223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.959250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.959262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.959272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.959295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:08.969265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.969346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.969373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.969385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.969395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.969418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 
00:32:35.715 [2024-07-25 12:45:08.979285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.979371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.979399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.979411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.979421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.979443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:08.989458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.989626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.989654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.989665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.989675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.989698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:08.999338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:08.999438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:08.999472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:08.999483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:08.999493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:08.999515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 
00:32:35.715 [2024-07-25 12:45:09.009400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:09.009492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:09.009520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:09.009532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:09.009542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.715 [2024-07-25 12:45:09.009572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.715 qpair failed and we were unable to recover it. 00:32:35.715 [2024-07-25 12:45:09.019429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.715 [2024-07-25 12:45:09.019520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.715 [2024-07-25 12:45:09.019557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.715 [2024-07-25 12:45:09.019569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.715 [2024-07-25 12:45:09.019581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.019606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.716 [2024-07-25 12:45:09.029726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.029919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.029947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.029958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.029968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.029991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 
00:32:35.716 [2024-07-25 12:45:09.039505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.039614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.039656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.039667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.039677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.039707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.716 [2024-07-25 12:45:09.049424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.049522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.049559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.049570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.049581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.049605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.716 [2024-07-25 12:45:09.059529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.059668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.059696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.059707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.059716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.059739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 
00:32:35.716 [2024-07-25 12:45:09.069876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.070002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.070029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.070040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.070049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.070072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.716 [2024-07-25 12:45:09.079509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.079618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.079645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.079656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.079665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.079688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.716 [2024-07-25 12:45:09.089617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.089699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.089733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.089745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.089755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.089778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 
00:32:35.716 [2024-07-25 12:45:09.099558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.099640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.099670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.099681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.099692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.099716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.716 [2024-07-25 12:45:09.109982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.110105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.110138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.110150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.110160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.110184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.716 [2024-07-25 12:45:09.119772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.119877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.119906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.119917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.119927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.119951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 
00:32:35.716 [2024-07-25 12:45:09.129784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.716 [2024-07-25 12:45:09.129865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.716 [2024-07-25 12:45:09.129893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.716 [2024-07-25 12:45:09.129904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.716 [2024-07-25 12:45:09.129914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.716 [2024-07-25 12:45:09.129945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.716 qpair failed and we were unable to recover it. 00:32:35.978 [2024-07-25 12:45:09.139850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.978 [2024-07-25 12:45:09.139936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.978 [2024-07-25 12:45:09.139964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.139976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.139985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.140008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.150150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.150278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.150308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.150320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.150331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.150353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 
00:32:35.979 [2024-07-25 12:45:09.159912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.160014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.160042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.160053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.160064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.160087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.169912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.170003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.170031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.170042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.170053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.170077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.179976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.180068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.180096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.180107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.180116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.180140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 
00:32:35.979 [2024-07-25 12:45:09.190323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.190458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.190488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.190500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.190510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.190533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.200060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.200155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.200183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.200194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.200203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.200226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.210091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.210184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.210212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.210224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.210237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.210261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 
00:32:35.979 [2024-07-25 12:45:09.219977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.220069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.220099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.220110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.220128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.220162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.230410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.230588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.230619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.230632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.230642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.230666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.240190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.240295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.240323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.240335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.240345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.240369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 
00:32:35.979 [2024-07-25 12:45:09.250220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.250318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.250346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.250357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.250367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.250391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.260228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.979 [2024-07-25 12:45:09.260311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.979 [2024-07-25 12:45:09.260338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.979 [2024-07-25 12:45:09.260349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.979 [2024-07-25 12:45:09.260358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.979 [2024-07-25 12:45:09.260382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.979 qpair failed and we were unable to recover it. 00:32:35.979 [2024-07-25 12:45:09.270536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.270670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.270702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.270714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.270724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.270748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 
00:32:35.980 [2024-07-25 12:45:09.280224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.280327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.280356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.280367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.280377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.280401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.290347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.290439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.290467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.290477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.290488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.290512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.300331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.300413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.300441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.300452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.300461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.300484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 
00:32:35.980 [2024-07-25 12:45:09.310718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.310837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.310868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.310887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.310898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.310921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.320454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.320570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.320597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.320608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.320619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.320642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.330478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.330599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.330628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.330640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.330650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.330673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 
00:32:35.980 [2024-07-25 12:45:09.340502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.340601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.340629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.340640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.340651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.340674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.350802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.350924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.350953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.350966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.350977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.351000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.360586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.360692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.360720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.360731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.360742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.360764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 
00:32:35.980 [2024-07-25 12:45:09.370531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.370635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.370662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.370673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.370683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.370706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.380640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.380738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.380765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.380777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.380789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.380812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.980 qpair failed and we were unable to recover it. 00:32:35.980 [2024-07-25 12:45:09.390923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:35.980 [2024-07-25 12:45:09.391052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:35.980 [2024-07-25 12:45:09.391081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:35.980 [2024-07-25 12:45:09.391093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:35.980 [2024-07-25 12:45:09.391104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:35.980 [2024-07-25 12:45:09.391127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:35.981 qpair failed and we were unable to recover it. 
00:32:36.242 [2024-07-25 12:45:09.400686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.242 [2024-07-25 12:45:09.400793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.242 [2024-07-25 12:45:09.400828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.242 [2024-07-25 12:45:09.400841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.242 [2024-07-25 12:45:09.400850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.400874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.410735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.410846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.410873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.410885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.410896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.410918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.420667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.420754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.420782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.420793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.420804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.420826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 
00:32:36.243 [2024-07-25 12:45:09.431096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.431243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.431272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.431285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.431295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.431318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.440833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.440939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.440969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.440980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.440991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.441021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.450777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.450860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.450889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.450900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.450910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.450941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 
00:32:36.243 [2024-07-25 12:45:09.460872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.460957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.460986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.460997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.461007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.461031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.471208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.471338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.471371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.471382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.471391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.471417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.481010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.481152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.481184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.481196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.481207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.481231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 
00:32:36.243 [2024-07-25 12:45:09.491033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.491123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.491158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.491169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.491181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.491207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.501058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.501199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.501230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.501241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.501252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.501275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.511320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.511437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.511465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.511477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.511487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.511510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 
00:32:36.243 [2024-07-25 12:45:09.521025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.521125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.521153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.243 [2024-07-25 12:45:09.521164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.243 [2024-07-25 12:45:09.521173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.243 [2024-07-25 12:45:09.521199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.243 qpair failed and we were unable to recover it. 00:32:36.243 [2024-07-25 12:45:09.531149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.243 [2024-07-25 12:45:09.531239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.243 [2024-07-25 12:45:09.531267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.531278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.531288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.531324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.541189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.541271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.541299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.541310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.541322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.541345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 
00:32:36.244 [2024-07-25 12:45:09.551455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.551583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.551613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.551626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.551637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.551660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.561265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.561360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.561388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.561398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.561407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.561431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.571280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.571375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.571403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.571415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.571425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.571448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 
00:32:36.244 [2024-07-25 12:45:09.581298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.581423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.581460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.581471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.581481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.581504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.591672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.591845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.591874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.591886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.591896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.591920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.601396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.601494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.601521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.601532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.601542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.601573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 
00:32:36.244 [2024-07-25 12:45:09.611280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.611358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.611386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.611397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.611406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.611429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.621429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.621517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.621545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.621567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.621585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.621609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.631726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.631848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.631878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.631891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.631901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.631924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 
00:32:36.244 [2024-07-25 12:45:09.641507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.641612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.641641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.641651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.641662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.641685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.244 [2024-07-25 12:45:09.651524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.244 [2024-07-25 12:45:09.651619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.244 [2024-07-25 12:45:09.651646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.244 [2024-07-25 12:45:09.651657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.244 [2024-07-25 12:45:09.651669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.244 [2024-07-25 12:45:09.651691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.244 qpair failed and we were unable to recover it. 00:32:36.506 [2024-07-25 12:45:09.661526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.661610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.506 [2024-07-25 12:45:09.661638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.506 [2024-07-25 12:45:09.661649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.506 [2024-07-25 12:45:09.661659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.506 [2024-07-25 12:45:09.661683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.506 qpair failed and we were unable to recover it. 
00:32:36.506 [2024-07-25 12:45:09.671793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.671937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.506 [2024-07-25 12:45:09.671965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.506 [2024-07-25 12:45:09.671976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.506 [2024-07-25 12:45:09.671986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.506 [2024-07-25 12:45:09.672008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.506 qpair failed and we were unable to recover it. 00:32:36.506 [2024-07-25 12:45:09.681631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.681727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.506 [2024-07-25 12:45:09.681753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.506 [2024-07-25 12:45:09.681763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.506 [2024-07-25 12:45:09.681772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.506 [2024-07-25 12:45:09.681795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.506 qpair failed and we were unable to recover it. 00:32:36.506 [2024-07-25 12:45:09.691535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.691652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.506 [2024-07-25 12:45:09.691677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.506 [2024-07-25 12:45:09.691688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.506 [2024-07-25 12:45:09.691698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.506 [2024-07-25 12:45:09.691729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.506 qpair failed and we were unable to recover it. 
00:32:36.506 [2024-07-25 12:45:09.701681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.701766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.506 [2024-07-25 12:45:09.701790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.506 [2024-07-25 12:45:09.701800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.506 [2024-07-25 12:45:09.701809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.506 [2024-07-25 12:45:09.701831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.506 qpair failed and we were unable to recover it. 00:32:36.506 [2024-07-25 12:45:09.711961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.712077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.506 [2024-07-25 12:45:09.712101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.506 [2024-07-25 12:45:09.712118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.506 [2024-07-25 12:45:09.712128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.506 [2024-07-25 12:45:09.712150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.506 qpair failed and we were unable to recover it. 00:32:36.506 [2024-07-25 12:45:09.721773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.721866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.506 [2024-07-25 12:45:09.721890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.506 [2024-07-25 12:45:09.721900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.506 [2024-07-25 12:45:09.721910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.506 [2024-07-25 12:45:09.721930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.506 qpair failed and we were unable to recover it. 
00:32:36.506 [2024-07-25 12:45:09.731677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.506 [2024-07-25 12:45:09.731799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.731823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.731834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.731845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.731866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.741853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.741935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.741958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.741968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.741977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.742000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.752209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.752394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.752418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.752429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.752440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.752461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 
00:32:36.507 [2024-07-25 12:45:09.761916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.762011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.762033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.762044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.762052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.762073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.771832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.771905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.771928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.771939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.771948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.771968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.781983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.782060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.782082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.782093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.782103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.782123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 
00:32:36.507 [2024-07-25 12:45:09.792165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.792280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.792304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.792315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.792325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.792345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.802047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.802162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.802185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.802201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.802210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.802230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.812085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.812179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.812201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.812212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.812221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.812241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 
00:32:36.507 [2024-07-25 12:45:09.822082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.822189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.822212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.822223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.822233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.822254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.832408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.832534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.507 [2024-07-25 12:45:09.832563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.507 [2024-07-25 12:45:09.832574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.507 [2024-07-25 12:45:09.832584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.507 [2024-07-25 12:45:09.832604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.507 qpair failed and we were unable to recover it. 00:32:36.507 [2024-07-25 12:45:09.842066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.507 [2024-07-25 12:45:09.842163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.842185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.842196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.842206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.842226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 
00:32:36.508 [2024-07-25 12:45:09.852207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.852280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.852301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.852312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.852321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.852341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 00:32:36.508 [2024-07-25 12:45:09.862218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.862294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.862314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.862325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.862335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.862355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 00:32:36.508 [2024-07-25 12:45:09.872463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.872633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.872657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.872668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.872677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.872699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 
00:32:36.508 [2024-07-25 12:45:09.882281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.882366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.882387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.882397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.882406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.882426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 00:32:36.508 [2024-07-25 12:45:09.892199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.892273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.892299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.892310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.892318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.892338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 00:32:36.508 [2024-07-25 12:45:09.902391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.902477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.902498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.902508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.902518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.902538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 
00:32:36.508 [2024-07-25 12:45:09.912561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.912671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.912692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.912703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.912713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.912732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 00:32:36.508 [2024-07-25 12:45:09.922385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.508 [2024-07-25 12:45:09.922476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.508 [2024-07-25 12:45:09.922497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.508 [2024-07-25 12:45:09.922507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.508 [2024-07-25 12:45:09.922517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.508 [2024-07-25 12:45:09.922536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.508 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:09.932454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:09.932536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:09.932564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:09.932574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:09.932583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:09.932607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 
00:32:36.770 [2024-07-25 12:45:09.942460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:09.942537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:09.942564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:09.942574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:09.942584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:09.942604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:09.952743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:09.952893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:09.952914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:09.952925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:09.952934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:09.952954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:09.962535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:09.962634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:09.962656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:09.962666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:09.962676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:09.962696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 
00:32:36.770 [2024-07-25 12:45:09.972430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:09.972501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:09.972521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:09.972531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:09.972540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:09.972566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:09.982570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:09.982647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:09.982673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:09.982684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:09.982693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:09.982713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:09.992924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:09.993038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:09.993060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:09.993070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:09.993079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:09.993099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 
00:32:36.770 [2024-07-25 12:45:10.002672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:10.002767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:10.002790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:10.002801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:10.002811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:10.002833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:10.012686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:10.012758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:10.012780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:10.012791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:10.012800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:10.012820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:10.022620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:10.022697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:10.022718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:10.022729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:10.022744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:10.022764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 
00:32:36.770 [2024-07-25 12:45:10.033064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:10.033181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:10.033202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.770 [2024-07-25 12:45:10.033212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.770 [2024-07-25 12:45:10.033223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.770 [2024-07-25 12:45:10.033244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.770 qpair failed and we were unable to recover it. 00:32:36.770 [2024-07-25 12:45:10.042846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.770 [2024-07-25 12:45:10.042977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.770 [2024-07-25 12:45:10.042998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.043008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.043017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.043038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.052713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.052792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.052815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.052826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.052835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.052855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 
00:32:36.771 [2024-07-25 12:45:10.062758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.062863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.062885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.062895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.062905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.062924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.073195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.073317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.073339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.073349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.073358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.073378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.082842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.082935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.082955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.082966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.082975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.082995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 
00:32:36.771 [2024-07-25 12:45:10.092864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.092945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.092966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.092976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.092985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.093005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.102985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.103061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.103081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.103091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.103100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.103120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.113307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.113420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.113442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.113458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.113467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.113486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 
00:32:36.771 [2024-07-25 12:45:10.123080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.123185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.123207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.123217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.123228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.123249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.133101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.133219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.133241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.133251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.133261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.133281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.143155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.143238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.143259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.143270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.143279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.143300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 
00:32:36.771 [2024-07-25 12:45:10.153403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.153519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.153540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.153559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.153568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.153588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.163204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.771 [2024-07-25 12:45:10.163291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.771 [2024-07-25 12:45:10.163313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.771 [2024-07-25 12:45:10.163324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.771 [2024-07-25 12:45:10.163333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.771 [2024-07-25 12:45:10.163353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.771 qpair failed and we were unable to recover it. 00:32:36.771 [2024-07-25 12:45:10.173385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.772 [2024-07-25 12:45:10.173466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.772 [2024-07-25 12:45:10.173488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.772 [2024-07-25 12:45:10.173498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.772 [2024-07-25 12:45:10.173507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.772 [2024-07-25 12:45:10.173527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.772 qpair failed and we were unable to recover it. 
00:32:36.772 [2024-07-25 12:45:10.183255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:36.772 [2024-07-25 12:45:10.183330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:36.772 [2024-07-25 12:45:10.183352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:36.772 [2024-07-25 12:45:10.183362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.772 [2024-07-25 12:45:10.183371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:36.772 [2024-07-25 12:45:10.183391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:36.772 qpair failed and we were unable to recover it. 00:32:37.033 [2024-07-25 12:45:10.193577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.033 [2024-07-25 12:45:10.193710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.033 [2024-07-25 12:45:10.193732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.033 [2024-07-25 12:45:10.193743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.033 [2024-07-25 12:45:10.193752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.033 [2024-07-25 12:45:10.193772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.033 qpair failed and we were unable to recover it. 00:32:37.033 [2024-07-25 12:45:10.203381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.033 [2024-07-25 12:45:10.203465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.033 [2024-07-25 12:45:10.203487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.033 [2024-07-25 12:45:10.203502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.033 [2024-07-25 12:45:10.203511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.033 [2024-07-25 12:45:10.203531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.033 qpair failed and we were unable to recover it. 
00:32:37.033 [2024-07-25 12:45:10.213351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.033 [2024-07-25 12:45:10.213431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.033 [2024-07-25 12:45:10.213453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.033 [2024-07-25 12:45:10.213463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.033 [2024-07-25 12:45:10.213474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.033 [2024-07-25 12:45:10.213495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.033 qpair failed and we were unable to recover it. 00:32:37.033 [2024-07-25 12:45:10.223377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.033 [2024-07-25 12:45:10.223461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.033 [2024-07-25 12:45:10.223481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.033 [2024-07-25 12:45:10.223490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.033 [2024-07-25 12:45:10.223500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.033 [2024-07-25 12:45:10.223521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.033 qpair failed and we were unable to recover it. 00:32:37.033 [2024-07-25 12:45:10.233694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.033 [2024-07-25 12:45:10.233802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.033 [2024-07-25 12:45:10.233824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.033 [2024-07-25 12:45:10.233835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.233844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.233864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 
00:32:37.034 [2024-07-25 12:45:10.243396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.243485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.243506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.243516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.243525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.243550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.253468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.253553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.253574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.253587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.253596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.253616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.263473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.263558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.263578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.263589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.263598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.263618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 
00:32:37.034 [2024-07-25 12:45:10.273802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.273920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.273942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.273952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.273961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.273981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.283559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.283643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.283664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.283675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.283684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.283704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.293492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.293576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.293606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.293617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.293626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.293653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 
00:32:37.034 [2024-07-25 12:45:10.303622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.303701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.303723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.303733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.303743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.303763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.313950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.314065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.314086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.314096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.314106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.314125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.323670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.323762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.323783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.323794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.323803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.323823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 
00:32:37.034 [2024-07-25 12:45:10.333705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.333786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.333809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.333819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.333829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.333853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.343914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.344013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.344034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.344045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.344054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.344074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 00:32:37.034 [2024-07-25 12:45:10.354209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.354352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.034 [2024-07-25 12:45:10.354374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.034 [2024-07-25 12:45:10.354385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.034 [2024-07-25 12:45:10.354394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.034 [2024-07-25 12:45:10.354414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.034 qpair failed and we were unable to recover it. 
00:32:37.034 [2024-07-25 12:45:10.363762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.034 [2024-07-25 12:45:10.363852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.363874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.363884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.363893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.363914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 00:32:37.035 [2024-07-25 12:45:10.373919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.373994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.374016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.374027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.374037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.374057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 00:32:37.035 [2024-07-25 12:45:10.383822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.383897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.383924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.383934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.383944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.383965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 
00:32:37.035 [2024-07-25 12:45:10.394095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.394210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.394232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.394242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.394251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.394271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 00:32:37.035 [2024-07-25 12:45:10.403959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.404048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.404071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.404081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.404090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.404111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 00:32:37.035 [2024-07-25 12:45:10.413984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.414065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.414087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.414098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.414107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.414127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 
00:32:37.035 [2024-07-25 12:45:10.424021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.424099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.424120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.424139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.424154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.424174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 00:32:37.035 [2024-07-25 12:45:10.434350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.434464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.434486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.434496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.434506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.434526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 00:32:37.035 [2024-07-25 12:45:10.444095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.035 [2024-07-25 12:45:10.444186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.035 [2024-07-25 12:45:10.444207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.035 [2024-07-25 12:45:10.444218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.035 [2024-07-25 12:45:10.444227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.035 [2024-07-25 12:45:10.444247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.035 qpair failed and we were unable to recover it. 
00:32:37.297 [2024-07-25 12:45:10.454008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.454108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.454130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.454141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.454150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.454170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 00:32:37.297 [2024-07-25 12:45:10.464132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.464209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.464231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.464241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.464251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.464270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 00:32:37.297 [2024-07-25 12:45:10.474468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.474599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.474622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.474632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.474642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.474662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 
00:32:37.297 [2024-07-25 12:45:10.484108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.484235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.484256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.484266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.484275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.484295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 00:32:37.297 [2024-07-25 12:45:10.494219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.494294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.494315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.494325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.494334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.494354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 00:32:37.297 [2024-07-25 12:45:10.504149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.504219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.504240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.504251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.504260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.504287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 
00:32:37.297 [2024-07-25 12:45:10.514588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.514700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.514723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.514733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.514748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.514768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 00:32:37.297 [2024-07-25 12:45:10.524317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.524406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.524428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.524438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.524448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.524467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 00:32:37.297 [2024-07-25 12:45:10.534352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.534428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.534449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.534459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.534468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.534487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 
00:32:37.297 [2024-07-25 12:45:10.544281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.297 [2024-07-25 12:45:10.544396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.297 [2024-07-25 12:45:10.544417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.297 [2024-07-25 12:45:10.544428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.297 [2024-07-25 12:45:10.544437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.297 [2024-07-25 12:45:10.544458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.297 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.554690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.554810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.554831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.554842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.554852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.554873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.564453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.564553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.564575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.564586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.564596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.564617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 
00:32:37.298 [2024-07-25 12:45:10.574406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.574513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.574535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.574551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.574562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.574582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.584579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.584660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.584681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.584692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.584701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.584722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.594862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.594975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.594997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.595008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.595017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.595036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 
00:32:37.298 [2024-07-25 12:45:10.604638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.604765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.604786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.604802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.604811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.604831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.614626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.614736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.614757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.614768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.614777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.614797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.624644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.624746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.624766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.624777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.624786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.624806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 
00:32:37.298 [2024-07-25 12:45:10.634914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.635064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.635086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.635097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.635106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.635128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.644689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.644794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.644816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.644826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.644835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.644855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.654734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.654810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.654831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.654842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.654852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.654874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 
00:32:37.298 [2024-07-25 12:45:10.664771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.664841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.664861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.664871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.664880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.298 [2024-07-25 12:45:10.664900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.298 qpair failed and we were unable to recover it. 00:32:37.298 [2024-07-25 12:45:10.675096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.298 [2024-07-25 12:45:10.675211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.298 [2024-07-25 12:45:10.675231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.298 [2024-07-25 12:45:10.675241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.298 [2024-07-25 12:45:10.675250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.299 [2024-07-25 12:45:10.675270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.299 qpair failed and we were unable to recover it. 00:32:37.299 [2024-07-25 12:45:10.684833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.299 [2024-07-25 12:45:10.684922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.299 [2024-07-25 12:45:10.684942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.299 [2024-07-25 12:45:10.684953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.299 [2024-07-25 12:45:10.684962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.299 [2024-07-25 12:45:10.684982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.299 qpair failed and we were unable to recover it. 
00:32:37.299 [2024-07-25 12:45:10.694878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.299 [2024-07-25 12:45:10.694958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.299 [2024-07-25 12:45:10.694982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.299 [2024-07-25 12:45:10.694993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.299 [2024-07-25 12:45:10.695003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.299 [2024-07-25 12:45:10.695022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.299 qpair failed and we were unable to recover it. 00:32:37.299 [2024-07-25 12:45:10.704904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.299 [2024-07-25 12:45:10.704979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.299 [2024-07-25 12:45:10.704999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.299 [2024-07-25 12:45:10.705009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.299 [2024-07-25 12:45:10.705019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.299 [2024-07-25 12:45:10.705038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.299 qpair failed and we were unable to recover it. 00:32:37.299 [2024-07-25 12:45:10.715238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.715348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.715369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.715380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.715389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.715409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 
00:32:37.561 [2024-07-25 12:45:10.724847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.724937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.724957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.724967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.724977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.724997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 00:32:37.561 [2024-07-25 12:45:10.735048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.735119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.735139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.735149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.735158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.735183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 00:32:37.561 [2024-07-25 12:45:10.745039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.745114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.745134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.745145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.745155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.745175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 
00:32:37.561 [2024-07-25 12:45:10.755329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.755470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.755490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.755500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.755509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.755528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 00:32:37.561 [2024-07-25 12:45:10.765106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.765210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.765230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.765241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.765250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.765269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 00:32:37.561 [2024-07-25 12:45:10.775134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.775235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.775256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.775266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.775275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.775294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 
00:32:37.561 [2024-07-25 12:45:10.785183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.785271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.785296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.785306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.785315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.785336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 00:32:37.561 [2024-07-25 12:45:10.795446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.795564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.795585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.795595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.795604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.795623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 00:32:37.561 [2024-07-25 12:45:10.805176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.805265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.805285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.805295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.805304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.805323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 
00:32:37.561 [2024-07-25 12:45:10.815239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.815316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.815336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.561 [2024-07-25 12:45:10.815346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.561 [2024-07-25 12:45:10.815355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.561 [2024-07-25 12:45:10.815375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.561 qpair failed and we were unable to recover it. 00:32:37.561 [2024-07-25 12:45:10.825249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.561 [2024-07-25 12:45:10.825327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.561 [2024-07-25 12:45:10.825349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.825366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.825375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.825401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.835579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.835692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.835713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.835724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.835733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.835753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 
00:32:37.562 [2024-07-25 12:45:10.845317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.845400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.845421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.845431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.845440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.845461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.855403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.855482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.855502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.855513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.855522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.855542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.865363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.865443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.865463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.865473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.865483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.865503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 
00:32:37.562 [2024-07-25 12:45:10.875690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.875813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.875834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.875844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.875853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.875872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.885469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.885565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.885585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.885595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.885604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.885624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.895503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.895587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.895608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.895618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.895627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.895646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 
00:32:37.562 [2024-07-25 12:45:10.905448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.905524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.905544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.905560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.905569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.905590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.915734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.915846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.915867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.915877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.915890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.915910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.925590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.925685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.925705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.925716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.925724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.925744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 
00:32:37.562 [2024-07-25 12:45:10.935602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.935675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.935695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.935705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.935714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.935734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.945591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.562 [2024-07-25 12:45:10.945721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.562 [2024-07-25 12:45:10.945742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.562 [2024-07-25 12:45:10.945752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.562 [2024-07-25 12:45:10.945761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.562 [2024-07-25 12:45:10.945781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.562 qpair failed and we were unable to recover it. 00:32:37.562 [2024-07-25 12:45:10.955979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.563 [2024-07-25 12:45:10.956094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.563 [2024-07-25 12:45:10.956115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.563 [2024-07-25 12:45:10.956126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.563 [2024-07-25 12:45:10.956134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.563 [2024-07-25 12:45:10.956153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.563 qpair failed and we were unable to recover it. 
00:32:37.563 [2024-07-25 12:45:10.965633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.563 [2024-07-25 12:45:10.965734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.563 [2024-07-25 12:45:10.965754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.563 [2024-07-25 12:45:10.965765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.563 [2024-07-25 12:45:10.965773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.563 [2024-07-25 12:45:10.965793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.563 qpair failed and we were unable to recover it. 00:32:37.563 [2024-07-25 12:45:10.975747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.563 [2024-07-25 12:45:10.975848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.563 [2024-07-25 12:45:10.975868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.563 [2024-07-25 12:45:10.975879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.563 [2024-07-25 12:45:10.975888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.563 [2024-07-25 12:45:10.975907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.563 qpair failed and we were unable to recover it. 00:32:37.825 [2024-07-25 12:45:10.985782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:10.985895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:10.985916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:10.985926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:10.985935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:10.985954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 
00:32:37.825 [2024-07-25 12:45:10.996064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:10.996173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:10.996194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:10.996203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:10.996212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:10.996233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 00:32:37.825 [2024-07-25 12:45:11.005787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:11.005887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:11.005907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:11.005923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:11.005932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:11.005951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 00:32:37.825 [2024-07-25 12:45:11.015898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:11.015975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:11.015995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:11.016005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:11.016015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:11.016034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 
00:32:37.825 [2024-07-25 12:45:11.025924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:11.026041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:11.026061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:11.026071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:11.026079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:11.026098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 00:32:37.825 [2024-07-25 12:45:11.036247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:11.036367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:11.036387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:11.036397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:11.036405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:11.036424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 00:32:37.825 [2024-07-25 12:45:11.046022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:11.046105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:11.046125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:11.046135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:11.046145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:11.046164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 
00:32:37.825 [2024-07-25 12:45:11.056003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.825 [2024-07-25 12:45:11.056076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.825 [2024-07-25 12:45:11.056096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.825 [2024-07-25 12:45:11.056105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.825 [2024-07-25 12:45:11.056115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.825 [2024-07-25 12:45:11.056134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.825 qpair failed and we were unable to recover it. 00:32:37.825 [2024-07-25 12:45:11.066102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.066176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.066195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.066205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.066213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.066233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.076370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.076483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.076504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.076514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.076522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.076542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 
00:32:37.826 [2024-07-25 12:45:11.086107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.086200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.086220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.086230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.086239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.086259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.096022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.096101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.096125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.096136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.096144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.096164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.106221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.106291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.106311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.106321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.106330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.106349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 
00:32:37.826 [2024-07-25 12:45:11.116492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.116618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.116639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.116649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.116658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.116677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.126257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.126341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.126361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.126371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.126380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.126399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.136171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.136244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.136264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.136274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.136283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.136314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 
00:32:37.826 [2024-07-25 12:45:11.146309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.146381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.146402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.146411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.146420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.146440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.156631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.156744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.156764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.156775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.156784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.156804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.166383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.166496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.166517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.166527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.166536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.166562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 
00:32:37.826 [2024-07-25 12:45:11.176428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.176500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.826 [2024-07-25 12:45:11.176522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.826 [2024-07-25 12:45:11.176532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.826 [2024-07-25 12:45:11.176540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.826 [2024-07-25 12:45:11.176569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.826 qpair failed and we were unable to recover it. 00:32:37.826 [2024-07-25 12:45:11.186450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.826 [2024-07-25 12:45:11.186563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.827 [2024-07-25 12:45:11.186588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.827 [2024-07-25 12:45:11.186599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.827 [2024-07-25 12:45:11.186607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.827 [2024-07-25 12:45:11.186627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.827 qpair failed and we were unable to recover it. 00:32:37.827 [2024-07-25 12:45:11.196765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.827 [2024-07-25 12:45:11.196875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.827 [2024-07-25 12:45:11.196895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.827 [2024-07-25 12:45:11.196904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.827 [2024-07-25 12:45:11.196913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.827 [2024-07-25 12:45:11.196934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.827 qpair failed and we were unable to recover it. 
00:32:37.827 [2024-07-25 12:45:11.206480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.827 [2024-07-25 12:45:11.206573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.827 [2024-07-25 12:45:11.206594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.827 [2024-07-25 12:45:11.206603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.827 [2024-07-25 12:45:11.206613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.827 [2024-07-25 12:45:11.206632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.827 qpair failed and we were unable to recover it. 00:32:37.827 [2024-07-25 12:45:11.216523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.827 [2024-07-25 12:45:11.216613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.827 [2024-07-25 12:45:11.216633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.827 [2024-07-25 12:45:11.216643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.827 [2024-07-25 12:45:11.216652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.827 [2024-07-25 12:45:11.216672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.827 qpair failed and we were unable to recover it. 00:32:37.827 [2024-07-25 12:45:11.226577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.827 [2024-07-25 12:45:11.226654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.827 [2024-07-25 12:45:11.226674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.827 [2024-07-25 12:45:11.226684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.827 [2024-07-25 12:45:11.226693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.827 [2024-07-25 12:45:11.226716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.827 qpair failed and we were unable to recover it. 
00:32:37.827 [2024-07-25 12:45:11.236910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.827 [2024-07-25 12:45:11.237029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.827 [2024-07-25 12:45:11.237049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.827 [2024-07-25 12:45:11.237059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.827 [2024-07-25 12:45:11.237068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:37.827 [2024-07-25 12:45:11.237087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:37.827 qpair failed and we were unable to recover it. 00:32:38.088 [2024-07-25 12:45:11.246560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.088 [2024-07-25 12:45:11.246644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.088 [2024-07-25 12:45:11.246665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.088 [2024-07-25 12:45:11.246675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.088 [2024-07-25 12:45:11.246685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.088 [2024-07-25 12:45:11.246712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.088 qpair failed and we were unable to recover it. 00:32:38.088 [2024-07-25 12:45:11.256540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.088 [2024-07-25 12:45:11.256626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.088 [2024-07-25 12:45:11.256646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.088 [2024-07-25 12:45:11.256656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.088 [2024-07-25 12:45:11.256666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.088 [2024-07-25 12:45:11.256686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.088 qpair failed and we were unable to recover it. 
00:32:38.088 [2024-07-25 12:45:11.266661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.088 [2024-07-25 12:45:11.266732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.088 [2024-07-25 12:45:11.266751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.088 [2024-07-25 12:45:11.266761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.088 [2024-07-25 12:45:11.266770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.088 [2024-07-25 12:45:11.266790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.088 qpair failed and we were unable to recover it. 00:32:38.088 [2024-07-25 12:45:11.277004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.277149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.277174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.277184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.277192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.277212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.286733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.286840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.286860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.286871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.286880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.286899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 
00:32:38.089 [2024-07-25 12:45:11.296742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.296817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.296837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.296847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.296856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.296875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.306774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.306844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.306864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.306874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.306882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.306901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.317128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.317247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.317267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.317278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.317292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.317311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 
00:32:38.089 [2024-07-25 12:45:11.326858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.326944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.326964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.326974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.326982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.327002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.336804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.336874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.336894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.336904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.336912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.336933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.346931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.347007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.347027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.347037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.347046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.347066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 
00:32:38.089 [2024-07-25 12:45:11.357270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.357390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.357410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.357420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.357429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.357449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.367020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.367139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.367160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.367170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.367180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.367199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.377064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.377173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.377193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.377203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.377212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.377232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 
00:32:38.089 [2024-07-25 12:45:11.387116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.387195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.387214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.387224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.387235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.387255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.089 [2024-07-25 12:45:11.397421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.089 [2024-07-25 12:45:11.397595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.089 [2024-07-25 12:45:11.397615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.089 [2024-07-25 12:45:11.397625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.089 [2024-07-25 12:45:11.397634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.089 [2024-07-25 12:45:11.397654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.089 qpair failed and we were unable to recover it. 00:32:38.090 [2024-07-25 12:45:11.407134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.407230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.407250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.407265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.407274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.407293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 
00:32:38.090 [2024-07-25 12:45:11.417155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.417230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.417250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.417260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.417270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.417289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 00:32:38.090 [2024-07-25 12:45:11.427210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.427286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.427307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.427317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.427327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.427347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 00:32:38.090 [2024-07-25 12:45:11.437507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.437640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.437660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.437670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.437679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.437699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 
00:32:38.090 [2024-07-25 12:45:11.447259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.447348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.447368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.447378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.447388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.447407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 00:32:38.090 [2024-07-25 12:45:11.457296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.457373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.457393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.457403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.457413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.457433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 00:32:38.090 [2024-07-25 12:45:11.467229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.467302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.467323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.467332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.467343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.467363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 
00:32:38.090 [2024-07-25 12:45:11.477650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.477769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.477790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.477800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.477809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.477829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 00:32:38.090 [2024-07-25 12:45:11.487397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.487513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.487534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.487544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.487558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.487578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 00:32:38.090 [2024-07-25 12:45:11.497407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.090 [2024-07-25 12:45:11.497531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.090 [2024-07-25 12:45:11.497558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.090 [2024-07-25 12:45:11.497573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.090 [2024-07-25 12:45:11.497582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.090 [2024-07-25 12:45:11.497603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.090 qpair failed and we were unable to recover it. 
00:32:38.352 [2024-07-25 12:45:11.507445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.352 [2024-07-25 12:45:11.507536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.352 [2024-07-25 12:45:11.507561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.352 [2024-07-25 12:45:11.507571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.352 [2024-07-25 12:45:11.507580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.352 [2024-07-25 12:45:11.507601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.352 qpair failed and we were unable to recover it. 00:32:38.352 [2024-07-25 12:45:11.517753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.352 [2024-07-25 12:45:11.517868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.352 [2024-07-25 12:45:11.517888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.352 [2024-07-25 12:45:11.517899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.352 [2024-07-25 12:45:11.517907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.352 [2024-07-25 12:45:11.517927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.352 qpair failed and we were unable to recover it. 00:32:38.352 [2024-07-25 12:45:11.527585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.352 [2024-07-25 12:45:11.527699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.352 [2024-07-25 12:45:11.527719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.352 [2024-07-25 12:45:11.527729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.352 [2024-07-25 12:45:11.527738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.352 [2024-07-25 12:45:11.527758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.352 qpair failed and we were unable to recover it. 
00:32:38.352 [2024-07-25 12:45:11.537465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.352 [2024-07-25 12:45:11.537543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.537569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.537579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.537588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.537609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.547584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.547661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.547681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.547690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.547699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.547719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.557905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.558017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.558037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.558047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.558057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.558077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 
00:32:38.353 [2024-07-25 12:45:11.567591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.567678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.567698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.567708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.567717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.567737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.577579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.577656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.577676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.577686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.577695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.577716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.587769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.587852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.587877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.587887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.587896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.587916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 
00:32:38.353 [2024-07-25 12:45:11.598058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.598168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.598188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.598198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.598206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.598226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.607796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.607890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.607910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.607920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.607929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.607948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.617811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.617890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.617910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.617920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.617929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.617950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 
00:32:38.353 [2024-07-25 12:45:11.627821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.627891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.627911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.627921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.627930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.627955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.638169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.638284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.638304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.638314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.638322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.638342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 00:32:38.353 [2024-07-25 12:45:11.647881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.353 [2024-07-25 12:45:11.647961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.353 [2024-07-25 12:45:11.647982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.353 [2024-07-25 12:45:11.647992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.353 [2024-07-25 12:45:11.648000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.353 [2024-07-25 12:45:11.648021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.353 qpair failed and we were unable to recover it. 
00:32:38.353 [2024-07-25 12:45:11.657908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.657983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.658004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.658014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.658023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.658043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.667965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.668069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.668090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.668100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.668108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.668128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.678268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.678376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.678404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.678414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.678423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.678442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 
00:32:38.354 [2024-07-25 12:45:11.688015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.688109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.688129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.688140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.688148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.688167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.698042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.698120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.698141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.698151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.698160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.698180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.708056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.708165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.708185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.708195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.708205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.708224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 
00:32:38.354 [2024-07-25 12:45:11.718293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.718409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.718429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.718439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.718452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.718471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.728204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.728332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.728352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.728362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.728371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.728390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.738156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.738253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.738273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.738285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.738295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.738315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 
00:32:38.354 [2024-07-25 12:45:11.748192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.748295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.748318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.748329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.748338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.748358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.758571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.758693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.758715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.758726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.758736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.758757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 00:32:38.354 [2024-07-25 12:45:11.768279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.354 [2024-07-25 12:45:11.768375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.354 [2024-07-25 12:45:11.768396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.354 [2024-07-25 12:45:11.768407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.354 [2024-07-25 12:45:11.768416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.354 [2024-07-25 12:45:11.768436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.354 qpair failed and we were unable to recover it. 
00:32:38.616 [2024-07-25 12:45:11.778271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.616 [2024-07-25 12:45:11.778350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.616 [2024-07-25 12:45:11.778372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.616 [2024-07-25 12:45:11.778382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.616 [2024-07-25 12:45:11.778392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.616 [2024-07-25 12:45:11.778412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.616 qpair failed and we were unable to recover it. 00:32:38.616 [2024-07-25 12:45:11.788244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.616 [2024-07-25 12:45:11.788326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.616 [2024-07-25 12:45:11.788352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.616 [2024-07-25 12:45:11.788363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.616 [2024-07-25 12:45:11.788373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.616 [2024-07-25 12:45:11.788400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.616 qpair failed and we were unable to recover it. 00:32:38.616 [2024-07-25 12:45:11.798640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.616 [2024-07-25 12:45:11.798760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.616 [2024-07-25 12:45:11.798782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.616 [2024-07-25 12:45:11.798793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.616 [2024-07-25 12:45:11.798802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.798823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 
00:32:38.617 [2024-07-25 12:45:11.808280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.808367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.808389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.808404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.808413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.808433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.818411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.818487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.818509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.818520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.818529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.818555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.828473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.828570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.828592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.828603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.828612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.828632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 
00:32:38.617 [2024-07-25 12:45:11.838778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.838891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.838913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.838923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.838933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.838953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.848501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.848614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.848635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.848646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.848655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.848675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.858544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.858632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.858656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.858671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.858680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.858700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 
00:32:38.617 [2024-07-25 12:45:11.868561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.868637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.868659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.868671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.868681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.868701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.878792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.878903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.878925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.878936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.878945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.878966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.888668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.888758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.888780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.888790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.888799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.888820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 
00:32:38.617 [2024-07-25 12:45:11.898677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.898759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.898781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.898797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.898806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.898826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.908734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.908812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.908834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.908845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.908857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.908877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 00:32:38.617 [2024-07-25 12:45:11.919044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.919192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.919213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.919224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.617 [2024-07-25 12:45:11.919233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.617 [2024-07-25 12:45:11.919253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.617 qpair failed and we were unable to recover it. 
00:32:38.617 [2024-07-25 12:45:11.928784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.617 [2024-07-25 12:45:11.928877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.617 [2024-07-25 12:45:11.928899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.617 [2024-07-25 12:45:11.928910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.928919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.928939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.618 [2024-07-25 12:45:11.938804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:11.938876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:11.938896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:11.938907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.938916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.938936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.618 [2024-07-25 12:45:11.948850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:11.948930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:11.948952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:11.948962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.948971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.948991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 
00:32:38.618 [2024-07-25 12:45:11.959248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:11.959362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:11.959384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:11.959394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.959403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.959423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.618 [2024-07-25 12:45:11.968917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:11.969028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:11.969049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:11.969059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.969068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.969088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.618 [2024-07-25 12:45:11.978955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:11.979036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:11.979057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:11.979067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.979076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.979095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 
00:32:38.618 [2024-07-25 12:45:11.988982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:11.989058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:11.989083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:11.989093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.989102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.989121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.618 [2024-07-25 12:45:11.999185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:11.999295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:11.999317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:11.999328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:11.999337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:11.999357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.618 [2024-07-25 12:45:12.009050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:12.009137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:12.009158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:12.009169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:12.009178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:12.009198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 
00:32:38.618 [2024-07-25 12:45:12.019131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:12.019206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:12.019228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:12.019239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:12.019249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:12.019268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.618 [2024-07-25 12:45:12.029104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.618 [2024-07-25 12:45:12.029183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.618 [2024-07-25 12:45:12.029205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.618 [2024-07-25 12:45:12.029215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.618 [2024-07-25 12:45:12.029225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.618 [2024-07-25 12:45:12.029249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.618 qpair failed and we were unable to recover it. 00:32:38.880 [2024-07-25 12:45:12.039432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.880 [2024-07-25 12:45:12.039579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.880 [2024-07-25 12:45:12.039600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.880 [2024-07-25 12:45:12.039611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.880 [2024-07-25 12:45:12.039620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.880 [2024-07-25 12:45:12.039640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.880 qpair failed and we were unable to recover it. 
00:32:38.880 [2024-07-25 12:45:12.049188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.880 [2024-07-25 12:45:12.049282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.880 [2024-07-25 12:45:12.049304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.880 [2024-07-25 12:45:12.049315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.880 [2024-07-25 12:45:12.049324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.880 [2024-07-25 12:45:12.049344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.880 qpair failed and we were unable to recover it. 00:32:38.880 [2024-07-25 12:45:12.059214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.880 [2024-07-25 12:45:12.059293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.880 [2024-07-25 12:45:12.059315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.880 [2024-07-25 12:45:12.059325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.880 [2024-07-25 12:45:12.059335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.880 [2024-07-25 12:45:12.059355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.880 qpair failed and we were unable to recover it. 00:32:38.880 [2024-07-25 12:45:12.069245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.880 [2024-07-25 12:45:12.069353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.880 [2024-07-25 12:45:12.069374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.880 [2024-07-25 12:45:12.069385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.880 [2024-07-25 12:45:12.069394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.880 [2024-07-25 12:45:12.069414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.880 qpair failed and we were unable to recover it. 
00:32:38.880 [2024-07-25 12:45:12.079527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.880 [2024-07-25 12:45:12.079692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.880 [2024-07-25 12:45:12.079718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.880 [2024-07-25 12:45:12.079729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.880 [2024-07-25 12:45:12.079738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.880 [2024-07-25 12:45:12.079759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.880 qpair failed and we were unable to recover it. 00:32:38.880 [2024-07-25 12:45:12.089287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.880 [2024-07-25 12:45:12.089372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.880 [2024-07-25 12:45:12.089393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.880 [2024-07-25 12:45:12.089402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.880 [2024-07-25 12:45:12.089411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.880 [2024-07-25 12:45:12.089431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.880 qpair failed and we were unable to recover it. 00:32:38.880 [2024-07-25 12:45:12.099318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.880 [2024-07-25 12:45:12.099391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.880 [2024-07-25 12:45:12.099411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.099422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.099431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.099450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 
00:32:38.881 [2024-07-25 12:45:12.109394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.109468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.109492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.109507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.109517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.109539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 00:32:38.881 [2024-07-25 12:45:12.119675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.119792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.119815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.119825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.119839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.119859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 00:32:38.881 [2024-07-25 12:45:12.129435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.129522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.129544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.129562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.129572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.129592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 
00:32:38.881 [2024-07-25 12:45:12.139355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.139431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.139451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.139462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.139471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.139491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 00:32:38.881 [2024-07-25 12:45:12.149480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.149559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.149579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.149590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.149599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.149619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 00:32:38.881 [2024-07-25 12:45:12.159693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.159869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.159891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.159901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.159910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.159930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 
00:32:38.881 [2024-07-25 12:45:12.169558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.169684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.169706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.169716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.169725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.169745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 00:32:38.881 [2024-07-25 12:45:12.179592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.179670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.179692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.179702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.179711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.179732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 00:32:38.881 [2024-07-25 12:45:12.189608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.189684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.189706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.189716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.189725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.189746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 
00:32:38.881 [2024-07-25 12:45:12.199955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.200067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.200089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.200100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.200110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.200129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.881 qpair failed and we were unable to recover it. 00:32:38.881 [2024-07-25 12:45:12.209681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.881 [2024-07-25 12:45:12.209769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.881 [2024-07-25 12:45:12.209791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.881 [2024-07-25 12:45:12.209801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.881 [2024-07-25 12:45:12.209815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.881 [2024-07-25 12:45:12.209835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 00:32:38.882 [2024-07-25 12:45:12.219675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.219760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.219781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.219792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.219801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.219821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 
00:32:38.882 [2024-07-25 12:45:12.229755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.229858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.229879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.229890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.229899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.229918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 00:32:38.882 [2024-07-25 12:45:12.239951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.240103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.240124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.240134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.240144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.240164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 00:32:38.882 [2024-07-25 12:45:12.249816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.249933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.249954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.249964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.249973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.249992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 
00:32:38.882 [2024-07-25 12:45:12.259833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.259911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.259932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.259942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.259951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.259971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 00:32:38.882 [2024-07-25 12:45:12.269869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.269942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.269962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.269973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.269982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.270001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 00:32:38.882 [2024-07-25 12:45:12.280096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.280208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.280230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.280241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.280250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.280270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 
00:32:38.882 [2024-07-25 12:45:12.289969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.882 [2024-07-25 12:45:12.290090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.882 [2024-07-25 12:45:12.290111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.882 [2024-07-25 12:45:12.290122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.882 [2024-07-25 12:45:12.290132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:38.882 [2024-07-25 12:45:12.290151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:38.882 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.300016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.300089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.300110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.300125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.300134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.300154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.310030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.310100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.310120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.310131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.310140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.310159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 
00:32:39.145 [2024-07-25 12:45:12.320303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.320413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.320435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.320446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.320455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.320475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.330084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.330169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.330189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.330199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.330208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.330228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.340019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.340098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.340120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.340131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.340140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.340166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 
00:32:39.145 [2024-07-25 12:45:12.350123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.350197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.350219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.350229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.350238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.350258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.360420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.360533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.360561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.360572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.360581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.360601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.370277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.370399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.370421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.370432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.370441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.370461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 
00:32:39.145 [2024-07-25 12:45:12.380244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.380346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.380368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.380378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.380388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.380407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.390276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.390355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.390381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.390391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.390401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.390420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.145 qpair failed and we were unable to recover it. 00:32:39.145 [2024-07-25 12:45:12.400596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.145 [2024-07-25 12:45:12.400712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.145 [2024-07-25 12:45:12.400734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.145 [2024-07-25 12:45:12.400745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.145 [2024-07-25 12:45:12.400755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.145 [2024-07-25 12:45:12.400775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 
00:32:39.146 [2024-07-25 12:45:12.410343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.410424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.410446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.410457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.410466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.410485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.420279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.420357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.420380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.420390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.420399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.420418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.430422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.430494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.430514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.430525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.430535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.430569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 
00:32:39.146 [2024-07-25 12:45:12.440640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.440754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.440776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.440786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.440795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.440815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.450504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.450597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.450619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.450629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.450639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.450659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.460569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.460697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.460718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.460729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.460739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.460760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 
00:32:39.146 [2024-07-25 12:45:12.470584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.470662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.470684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.470694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.470703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.470724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.480922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.481101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.481127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.481137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.481147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.481166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.490626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.490707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.490727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.490737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.490746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.490767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 
00:32:39.146 [2024-07-25 12:45:12.500648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.500722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.500743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.500754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.500763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.500783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.510638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.510718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.146 [2024-07-25 12:45:12.510741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.146 [2024-07-25 12:45:12.510751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.146 [2024-07-25 12:45:12.510760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.146 [2024-07-25 12:45:12.510781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.146 qpair failed and we were unable to recover it. 00:32:39.146 [2024-07-25 12:45:12.521035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.146 [2024-07-25 12:45:12.521156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.147 [2024-07-25 12:45:12.521178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.147 [2024-07-25 12:45:12.521188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.147 [2024-07-25 12:45:12.521197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.147 [2024-07-25 12:45:12.521222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.147 qpair failed and we were unable to recover it. 
00:32:39.147 [2024-07-25 12:45:12.530811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.147 [2024-07-25 12:45:12.530909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.147 [2024-07-25 12:45:12.530930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.147 [2024-07-25 12:45:12.530941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.147 [2024-07-25 12:45:12.530950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.147 [2024-07-25 12:45:12.530970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.147 qpair failed and we were unable to recover it. 00:32:39.147 [2024-07-25 12:45:12.540757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.147 [2024-07-25 12:45:12.540836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.147 [2024-07-25 12:45:12.540858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.147 [2024-07-25 12:45:12.540869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.147 [2024-07-25 12:45:12.540878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.147 [2024-07-25 12:45:12.540898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.147 qpair failed and we were unable to recover it. 00:32:39.147 [2024-07-25 12:45:12.550918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.147 [2024-07-25 12:45:12.550999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.147 [2024-07-25 12:45:12.551020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.147 [2024-07-25 12:45:12.551032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.147 [2024-07-25 12:45:12.551041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.147 [2024-07-25 12:45:12.551061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.147 qpair failed and we were unable to recover it. 
00:32:39.147 [2024-07-25 12:45:12.561207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.147 [2024-07-25 12:45:12.561321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.147 [2024-07-25 12:45:12.561342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.147 [2024-07-25 12:45:12.561353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.147 [2024-07-25 12:45:12.561363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.147 [2024-07-25 12:45:12.561382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.147 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.570923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.571034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.571055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.571065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.571075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.571094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.580948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.581056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.581077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.581088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.581097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.581116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 
00:32:39.409 [2024-07-25 12:45:12.590903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.590995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.591016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.591026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.591035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.591055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.601320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.601442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.601462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.601473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.601482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.601501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.610936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.611020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.611041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.611051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.611064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.611085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 
00:32:39.409 [2024-07-25 12:45:12.621011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.621112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.621133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.621143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.621152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.621172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.631094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.631181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.631203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.631213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.631222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.631242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.641317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.641425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.641447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.641457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.641466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.641486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 
00:32:39.409 [2024-07-25 12:45:12.651110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.651225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.651246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.651257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.651266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.651286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.661211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.661286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.661308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.661318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.661328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.661348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 00:32:39.409 [2024-07-25 12:45:12.671237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.409 [2024-07-25 12:45:12.671322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.409 [2024-07-25 12:45:12.671343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.409 [2024-07-25 12:45:12.671353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.409 [2024-07-25 12:45:12.671362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.409 [2024-07-25 12:45:12.671383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.409 qpair failed and we were unable to recover it. 
00:32:39.409 [2024-07-25 12:45:12.681566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.681733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.681755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.681765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.681774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.681794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.691218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.691306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.691328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.691338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.691348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.691367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.701353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.701442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.701463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.701478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.701487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.701506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 
00:32:39.410 [2024-07-25 12:45:12.711376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.711448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.711469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.711480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.711490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.711510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.721693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.721813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.721834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.721845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.721854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.721874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.731425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.731512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.731531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.731542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.731556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.731576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 
00:32:39.410 [2024-07-25 12:45:12.741412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.741488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.741507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.741518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.741527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.741554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.751493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.751572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.751594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.751604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.751614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.751634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.761704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.761813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.761834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.761844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.761853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.761873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 
00:32:39.410 [2024-07-25 12:45:12.771463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.771566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.771587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.771598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.771607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.771626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.781597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.781678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.781700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.781710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.781719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.781739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 00:32:39.410 [2024-07-25 12:45:12.791605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.410 [2024-07-25 12:45:12.791682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.410 [2024-07-25 12:45:12.791709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.410 [2024-07-25 12:45:12.791719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.410 [2024-07-25 12:45:12.791728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.410 [2024-07-25 12:45:12.791748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.410 qpair failed and we were unable to recover it. 
00:32:39.410 [2024-07-25 12:45:12.801945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.411 [2024-07-25 12:45:12.802085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.411 [2024-07-25 12:45:12.802106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.411 [2024-07-25 12:45:12.802117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.411 [2024-07-25 12:45:12.802126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.411 [2024-07-25 12:45:12.802145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.411 qpair failed and we were unable to recover it. 00:32:39.411 [2024-07-25 12:45:12.811593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.411 [2024-07-25 12:45:12.811691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.411 [2024-07-25 12:45:12.811712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.411 [2024-07-25 12:45:12.811722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.411 [2024-07-25 12:45:12.811731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.411 [2024-07-25 12:45:12.811751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.411 qpair failed and we were unable to recover it. 00:32:39.411 [2024-07-25 12:45:12.821712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.411 [2024-07-25 12:45:12.821787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.411 [2024-07-25 12:45:12.821809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.411 [2024-07-25 12:45:12.821820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.411 [2024-07-25 12:45:12.821829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.411 [2024-07-25 12:45:12.821848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.411 qpair failed and we were unable to recover it. 
00:32:39.671 [2024-07-25 12:45:12.831810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.831895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.831917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.831927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.831937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.831962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 00:32:39.671 [2024-07-25 12:45:12.842077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.842187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.842209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.842219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.842228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.842249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 00:32:39.671 [2024-07-25 12:45:12.851857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.851941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.851963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.851973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.851982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.852002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 
00:32:39.671 [2024-07-25 12:45:12.861864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.861945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.861969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.861985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.861994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.862015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 00:32:39.671 [2024-07-25 12:45:12.871854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.871930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.871952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.871963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.871972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.871993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 00:32:39.671 [2024-07-25 12:45:12.882203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.882322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.882348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.882359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.882368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.882388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 
00:32:39.671 [2024-07-25 12:45:12.891854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.891942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.891965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.891975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.891985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.892005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 00:32:39.671 [2024-07-25 12:45:12.902024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.902107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.902129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.902139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.902148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.902168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 00:32:39.671 [2024-07-25 12:45:12.912040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.671 [2024-07-25 12:45:12.912117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.671 [2024-07-25 12:45:12.912138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.671 [2024-07-25 12:45:12.912149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.671 [2024-07-25 12:45:12.912158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.671 [2024-07-25 12:45:12.912177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.671 qpair failed and we were unable to recover it. 
00:32:39.672 [2024-07-25 12:45:12.922324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.922467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.922489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.922500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.922509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.922533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:12.932101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.932204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.932225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.932235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.932245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.932265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:12.942169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.942254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.942276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.942286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.942295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.942315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 
00:32:39.672 [2024-07-25 12:45:12.952271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.952352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.952372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.952383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.952392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.952411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:12.962606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.962762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.962784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.962795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.962805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.962825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:12.972209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.972293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.972318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.972329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.972338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.972358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 
00:32:39.672 [2024-07-25 12:45:12.982182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.982264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.982285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.982295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.982304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.982324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:12.992261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:12.992336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:12.992357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:12.992367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:12.992376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:12.992395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:13.002594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:13.002738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:13.002760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:13.002771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:13.002780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:13.002800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 
00:32:39.672 [2024-07-25 12:45:13.012330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:13.012444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:13.012466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:13.012477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:13.012491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:13.012511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:13.022251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:13.022328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:13.022348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:13.022358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:13.022368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:13.022387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 00:32:39.672 [2024-07-25 12:45:13.032384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:13.032466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.672 [2024-07-25 12:45:13.032487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.672 [2024-07-25 12:45:13.032498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.672 [2024-07-25 12:45:13.032507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.672 [2024-07-25 12:45:13.032527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.672 qpair failed and we were unable to recover it. 
00:32:39.672 [2024-07-25 12:45:13.042625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.672 [2024-07-25 12:45:13.042741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.673 [2024-07-25 12:45:13.042761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.673 [2024-07-25 12:45:13.042771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.673 [2024-07-25 12:45:13.042780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.673 [2024-07-25 12:45:13.042801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.673 qpair failed and we were unable to recover it. 00:32:39.673 [2024-07-25 12:45:13.052383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.673 [2024-07-25 12:45:13.052474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.673 [2024-07-25 12:45:13.052495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.673 [2024-07-25 12:45:13.052506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.673 [2024-07-25 12:45:13.052515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.673 [2024-07-25 12:45:13.052535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.673 qpair failed and we were unable to recover it. 00:32:39.673 [2024-07-25 12:45:13.062524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.673 [2024-07-25 12:45:13.062626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.673 [2024-07-25 12:45:13.062648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.673 [2024-07-25 12:45:13.062658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.673 [2024-07-25 12:45:13.062667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.673 [2024-07-25 12:45:13.062688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.673 qpair failed and we were unable to recover it. 
00:32:39.673 [2024-07-25 12:45:13.072528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.673 [2024-07-25 12:45:13.072613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.673 [2024-07-25 12:45:13.072633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.673 [2024-07-25 12:45:13.072644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.673 [2024-07-25 12:45:13.072654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.673 [2024-07-25 12:45:13.072674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.673 qpair failed and we were unable to recover it. 00:32:39.673 [2024-07-25 12:45:13.082868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.673 [2024-07-25 12:45:13.082978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.673 [2024-07-25 12:45:13.082999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.673 [2024-07-25 12:45:13.083010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.673 [2024-07-25 12:45:13.083019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.673 [2024-07-25 12:45:13.083038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.673 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.092600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.092682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.092703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.092713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.092723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.092742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 
00:32:39.934 [2024-07-25 12:45:13.102622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.102693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.102713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.102728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.102738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.102757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.112636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.112748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.112770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.112781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.112790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.112810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.122972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.123089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.123110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.123120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.123130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.123150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 
00:32:39.934 [2024-07-25 12:45:13.132715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.132813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.132834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.132846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.132855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.132874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.142774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.142855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.142876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.142886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.142896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.142916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.152804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.152912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.152934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.152946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.152955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.152976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 
00:32:39.934 [2024-07-25 12:45:13.163120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.163262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.163283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.163294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.163303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.163324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.172782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.172902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.172923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.172934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.172943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.172963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.182872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.182947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.182967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.182977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.182987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.183006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 
00:32:39.934 [2024-07-25 12:45:13.192915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.192992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.934 [2024-07-25 12:45:13.193012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.934 [2024-07-25 12:45:13.193032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.934 [2024-07-25 12:45:13.193041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.934 [2024-07-25 12:45:13.193061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.934 qpair failed and we were unable to recover it. 00:32:39.934 [2024-07-25 12:45:13.203230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.934 [2024-07-25 12:45:13.203348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.203369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.203380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.203389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.935 [2024-07-25 12:45:13.203408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.935 qpair failed and we were unable to recover it. 00:32:39.935 [2024-07-25 12:45:13.212981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.935 [2024-07-25 12:45:13.213068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.213089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.213102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.213112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.935 [2024-07-25 12:45:13.213132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.935 qpair failed and we were unable to recover it. 
00:32:39.935 [2024-07-25 12:45:13.222911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.935 [2024-07-25 12:45:13.222991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.223012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.223022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.223031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.935 [2024-07-25 12:45:13.223057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.935 qpair failed and we were unable to recover it. 00:32:39.935 [2024-07-25 12:45:13.233038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.935 [2024-07-25 12:45:13.233114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.233136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.233146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.233156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.935 [2024-07-25 12:45:13.233177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.935 qpair failed and we were unable to recover it. 00:32:39.935 [2024-07-25 12:45:13.243368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.935 [2024-07-25 12:45:13.243484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.243505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.243516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.243524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf54000b90 00:32:39.935 [2024-07-25 12:45:13.243544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:39.935 qpair failed and we were unable to recover it. 
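The failures above all share one signature: the target rejects each I/O queue CONNECT with "Unknown controller ID 0x1", the initiator sees the Connect command complete with sct 1, sc 130 (command-specific status 0x82, which for a Fabrics Connect command should decode to Connect Invalid Parameters), and the qpair is dropped with CQ transport error -6. When triaging a run like this it helps to reduce the storm to counts. The snippet below is a minimal sketch, assuming the console output has been saved to a file named disconnect_console.log (a hypothetical name, this job does not write such a file) and that the message format matches the lines shown here:

  # Count how many qpairs failed without recovering
  grep -c 'qpair failed and we were unable to recover it' disconnect_console.log

  # Break the failures down by the qpair id reported in the CQ transport error
  grep 'CQ transport error' disconnect_console.log \
    | sed -n 's/.*on qpair id \([0-9][0-9]*\).*/\1/p' \
    | sort | uniq -c

  # Break them down by TCP qpair pointer to see how many distinct qpairs were hit
  grep 'Failed to connect tqpair=' disconnect_console.log \
    | sed -n 's/.*tqpair=\(0x[0-9a-f]*\).*/\1/p' \
    | sort | uniq -c

On the output up to this point the counts would show every failure landing on qpair id 1 and tqpair 0x7fdf54000b90, consistent with the target-disconnect scenario this test case exercises.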
00:32:39.935 [2024-07-25 12:45:13.253131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.935 [2024-07-25 12:45:13.253311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.253377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.253404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.253423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf44000b90 00:32:39.935 [2024-07-25 12:45:13.253476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:39.935 qpair failed and we were unable to recover it. 00:32:39.935 [2024-07-25 12:45:13.263137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.935 [2024-07-25 12:45:13.263270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.263316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.263337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.263356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf44000b90 00:32:39.935 [2024-07-25 12:45:13.263400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:39.935 qpair failed and we were unable to recover it. 00:32:39.935 [2024-07-25 12:45:13.273196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.935 [2024-07-25 12:45:13.273336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.935 [2024-07-25 12:45:13.273378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.935 [2024-07-25 12:45:13.273399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.935 [2024-07-25 12:45:13.273417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdf44000b90 00:32:39.935 [2024-07-25 12:45:13.273459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:39.935 qpair failed and we were unable to recover it. 00:32:39.935 [2024-07-25 12:45:13.273605] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:39.935 A controller has encountered a failure and is being reset. 00:32:39.935 [2024-07-25 12:45:13.273712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd65bd0 (9): Bad file descriptor 00:32:39.935 Controller properly reset. 
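After the repeated rejected CONNECT attempts on both TCP qpairs shown above (0x7fdf54000b90 on qpair id 1, then 0x7fdf44000b90 on qpair id 4), the Keep Alive submission fails, the admin connection is flushed with a bad file descriptor, and the controller is reset, which is the recovery path this test is meant to reach. For reference, the listener details repeated in every message (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1) could also be exercised by hand once the target is healthy; the sketch below assumes the kernel nvme-cli initiator on a host that can reach 10.0.0.2, which is an illustrative assumption only, since this job drives the connection through the SPDK host code rather than nvme-cli:

  # Manual connect/disconnect against the same subsystem (illustrative, assumes nvme-cli)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                               # confirm the controller attached
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1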
00:32:40.195 Initializing NVMe Controllers 00:32:40.195 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:40.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:40.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:40.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:40.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:40.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:40.195 Initialization complete. Launching workers. 00:32:40.195 Starting thread on core 1 00:32:40.195 Starting thread on core 2 00:32:40.195 Starting thread on core 3 00:32:40.195 Starting thread on core 0 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:40.195 00:32:40.195 real 0m11.517s 00:32:40.195 user 0m21.094s 00:32:40.195 sys 0m4.014s 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:40.195 ************************************ 00:32:40.195 END TEST nvmf_target_disconnect_tc2 00:32:40.195 ************************************ 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:40.195 rmmod nvme_tcp 00:32:40.195 rmmod nvme_fabrics 00:32:40.195 rmmod nvme_keyring 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 620157 ']' 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 620157 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 620157 ']' 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 620157 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 
00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 620157 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 620157' 00:32:40.195 killing process with pid 620157 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 620157 00:32:40.195 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 620157 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.456 12:45:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.043 12:45:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:43.043 00:32:43.043 real 0m22.894s 00:32:43.043 user 0m48.814s 00:32:43.043 sys 0m10.965s 00:32:43.043 12:45:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:43.043 12:45:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:43.043 ************************************ 00:32:43.043 END TEST nvmf_target_disconnect 00:32:43.043 ************************************ 00:32:43.043 12:45:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:43.043 12:45:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:43.043 00:32:43.043 real 6m56.929s 00:32:43.043 user 11m50.481s 00:32:43.043 sys 2m25.017s 00:32:43.043 12:45:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:43.043 12:45:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.043 ************************************ 00:32:43.043 END TEST nvmf_host 00:32:43.043 ************************************ 00:32:43.043 12:45:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:43.043 00:32:43.043 real 24m48.457s 00:32:43.043 user 49m52.493s 00:32:43.043 sys 8m1.554s 00:32:43.043 12:45:15 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:43.043 12:45:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.043 ************************************ 00:32:43.043 END TEST nvmf_tcp 00:32:43.043 ************************************ 00:32:43.043 12:45:16 -- 
common/autotest_common.sh@1142 -- # return 0 00:32:43.043 12:45:16 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:32:43.043 12:45:16 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:43.043 12:45:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:43.043 12:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:43.043 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:32:43.043 ************************************ 00:32:43.043 START TEST spdkcli_nvmf_tcp 00:32:43.043 ************************************ 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:43.043 * Looking for test storage... 00:32:43.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.043 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.044 12:45:16 
spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=622276 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # 
waitforlisten 622276 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 622276 ']' 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:43.044 12:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.044 [2024-07-25 12:45:16.277931] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:32:43.044 [2024-07-25 12:45:16.277998] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622276 ] 00:32:43.044 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.044 [2024-07-25 12:45:16.364722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:43.044 [2024-07-25 12:45:16.459496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.044 [2024-07-25 12:45:16.459502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.988 12:45:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:43.988 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:43.988 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:43.988 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:43.988 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:43.988 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:43.988 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:43.988 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces 
create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:43.988 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:43.988 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:43.988 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:43.988 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:43.988 ' 00:32:46.534 [2024-07-25 12:45:19.895024] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.917 [2024-07-25 12:45:21.219740] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:50.458 [2024-07-25 12:45:23.687037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:52.997 [2024-07-25 12:45:25.845686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:54.377 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:54.377 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:54.377 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:54.377 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:54.377 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:54.377 Executing command: ['/bdevs/malloc create 32 512 
Malloc6', 'Malloc6', True] 00:32:54.377 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:54.377 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:54.377 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:54.377 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:54.378 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:54.378 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:54.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:54.378 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:54.378 12:45:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:54.638 12:45:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.899 12:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:54.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:54.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:54.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:54.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:54.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:54.899 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:54.899 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:54.899 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:54.899 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:54.899 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:54.899 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:54.899 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:54.899 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:54.899 ' 00:33:00.175 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:00.175 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:00.175 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:00.175 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:00.175 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:00.175 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
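The clear pass walks back the same tree the create pass built: it deletes namespaces, hosts, and listen addresses from cnode1, removes all three subsystems, and then deletes the six malloc bdevs, with spdkcli translating each step into a JSON-RPC call against the running nvmf_tgt. As a rough equivalent (an assumed mapping for illustration, not something this log shows), the teardown could also be issued directly with scripts/rpc.py, where deleting a subsystem drops its namespaces, hosts, and listeners in one step:

  # Assumed rpc.py equivalent of the spdkcli teardown above (illustrative sketch)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for nqn in nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2 nqn.2014-08.org.spdk:cnode3; do
      $RPC nvmf_delete_subsystem "$nqn"
  done
  for bdev in Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $RPC bdev_malloc_delete "$bdev"
  done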
delete_all', '127.0.0.1:4261', False] 00:33:00.175 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:00.175 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:00.175 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:00.175 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:00.175 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:00.175 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:00.175 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:00.175 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 622276 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 622276 ']' 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 622276 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 622276 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 622276' 00:33:00.434 killing process with pid 622276 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 622276 00:33:00.434 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 622276 00:33:00.694 12:45:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:00.694 12:45:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:00.694 12:45:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 622276 ']' 00:33:00.694 12:45:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 622276 00:33:00.694 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 622276 ']' 00:33:00.694 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 622276 00:33:00.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (622276) - No such process 00:33:00.695 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 622276 is not found' 00:33:00.695 Process with pid 622276 is not found 00:33:00.695 12:45:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:00.695 12:45:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:00.695 12:45:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:00.695 00:33:00.695 real 0m17.795s 00:33:00.695 user 0m39.349s 00:33:00.695 sys 0m1.063s 00:33:00.695 12:45:33 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:33:00.695 12:45:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:00.695 ************************************ 00:33:00.695 END TEST spdkcli_nvmf_tcp 00:33:00.695 ************************************ 00:33:00.695 12:45:33 -- common/autotest_common.sh@1142 -- # return 0 00:33:00.695 12:45:33 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:00.695 12:45:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:00.695 12:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:00.695 12:45:33 -- common/autotest_common.sh@10 -- # set +x 00:33:00.695 ************************************ 00:33:00.695 START TEST nvmf_identify_passthru 00:33:00.695 ************************************ 00:33:00.695 12:45:33 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:00.695 * Looking for test storage... 00:33:00.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:00.695 12:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.695 12:45:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.695 12:45:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.695 12:45:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:00.695 12:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.695 12:45:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.695 12:45:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.695 12:45:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:00.695 12:45:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.695 12:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.695 12:45:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.695 12:45:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:00.695 12:45:34 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:00.695 12:45:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:08.832 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.832 12:45:42 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:08.833 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:08.833 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:08.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:08.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
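The nvmf_tcp_init trace that follows moves one port of the detected E810 pair into a dedicated network namespace so the target and the initiator can exchange NVMe/TCP traffic on the same host. A condensed sketch of the equivalent manual setup, using the interface names and addresses that appear in the trace (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2); the helper in nvmf/common.sh wraps these same commands, so treat this as an illustration rather than the exact script:

    # target side: isolate cvl_0_0 in its own namespace and assign the target IP
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: keep cvl_0_1 in the default namespace with the initiator IP
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1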
00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.833 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.095 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.095 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.095 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:09.095 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:09.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:33:09.357 00:33:09.357 --- 10.0.0.2 ping statistics --- 00:33:09.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.357 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:09.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:33:09.357 00:33:09.357 --- 10.0.0.1 ping statistics --- 00:33:09.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.357 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:09.357 12:45:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:09.357 12:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:09.357 12:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:33:09.357 12:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:33:09.357 12:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:33:09.357 12:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:33:09.357 12:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:09.357 12:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:09.357 12:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:09.357 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.667 
12:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ9512038S2P0BGN 00:33:14.667 12:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:14.667 12:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:14.667 12:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:14.667 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.955 12:45:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:19.955 12:45:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:19.955 12:45:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:19.955 12:45:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=630794 00:33:19.955 12:45:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:19.955 12:45:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:19.955 12:45:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 630794 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 630794 ']' 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:19.955 12:45:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:19.955 [2024-07-25 12:45:52.986998] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:33:19.955 [2024-07-25 12:45:52.987058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.955 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.955 [2024-07-25 12:45:53.077475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:19.955 [2024-07-25 12:45:53.142980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.955 [2024-07-25 12:45:53.143015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:19.955 [2024-07-25 12:45:53.143022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.955 [2024-07-25 12:45:53.143030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.955 [2024-07-25 12:45:53.143035] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.955 [2024-07-25 12:45:53.143144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.955 [2024-07-25 12:45:53.143288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:19.955 [2024-07-25 12:45:53.143429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.955 [2024-07-25 12:45:53.143431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:33:20.524 12:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.524 INFO: Log level set to 20 00:33:20.524 INFO: Requests: 00:33:20.524 { 00:33:20.524 "jsonrpc": "2.0", 00:33:20.524 "method": "nvmf_set_config", 00:33:20.524 "id": 1, 00:33:20.524 "params": { 00:33:20.524 "admin_cmd_passthru": { 00:33:20.524 "identify_ctrlr": true 00:33:20.524 } 00:33:20.524 } 00:33:20.524 } 00:33:20.524 00:33:20.524 INFO: response: 00:33:20.524 { 00:33:20.524 "jsonrpc": "2.0", 00:33:20.524 "id": 1, 00:33:20.524 "result": true 00:33:20.524 } 00:33:20.524 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.524 12:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.524 INFO: Setting log level to 20 00:33:20.524 INFO: Setting log level to 20 00:33:20.524 INFO: Log level set to 20 00:33:20.524 INFO: Log level set to 20 00:33:20.524 INFO: Requests: 00:33:20.524 { 00:33:20.524 "jsonrpc": "2.0", 00:33:20.524 "method": "framework_start_init", 00:33:20.524 "id": 1 00:33:20.524 } 00:33:20.524 00:33:20.524 INFO: Requests: 00:33:20.524 { 00:33:20.524 "jsonrpc": "2.0", 00:33:20.524 "method": "framework_start_init", 00:33:20.524 "id": 1 00:33:20.524 } 00:33:20.524 00:33:20.524 [2024-07-25 12:45:53.921955] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:20.524 INFO: response: 00:33:20.524 { 00:33:20.524 "jsonrpc": "2.0", 00:33:20.524 "id": 1, 00:33:20.524 "result": true 00:33:20.524 } 00:33:20.524 00:33:20.524 INFO: response: 00:33:20.524 { 00:33:20.524 "jsonrpc": "2.0", 00:33:20.524 "id": 1, 00:33:20.524 "result": true 00:33:20.524 } 00:33:20.524 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.524 12:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:20.524 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.524 12:45:53 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:20.524 INFO: Setting log level to 40 00:33:20.524 INFO: Setting log level to 40 00:33:20.524 INFO: Setting log level to 40 00:33:20.524 [2024-07-25 12:45:53.935205] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.785 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.785 12:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:20.785 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:20.785 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.785 12:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:33:20.785 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.785 12:45:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.083 Nvme0n1 00:33:24.083 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.083 12:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:24.083 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.083 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.083 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.083 12:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:24.083 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.083 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.083 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.084 12:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.084 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.084 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.084 [2024-07-25 12:45:56.856220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.084 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.084 12:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:24.084 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.084 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.084 [ 00:33:24.084 { 00:33:24.084 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:24.084 "subtype": "Discovery", 00:33:24.084 "listen_addresses": [], 00:33:24.084 "allow_any_host": true, 00:33:24.084 "hosts": [] 00:33:24.084 }, 00:33:24.084 { 00:33:24.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.084 "subtype": "NVMe", 00:33:24.084 "listen_addresses": [ 00:33:24.084 { 00:33:24.084 "trtype": "TCP", 00:33:24.084 "adrfam": "IPv4", 00:33:24.084 "traddr": "10.0.0.2", 00:33:24.084 "trsvcid": "4420" 00:33:24.084 } 00:33:24.084 ], 00:33:24.084 "allow_any_host": true, 00:33:24.084 "hosts": [], 00:33:24.084 "serial_number": 
"SPDK00000000000001", 00:33:24.084 "model_number": "SPDK bdev Controller", 00:33:24.084 "max_namespaces": 1, 00:33:24.084 "min_cntlid": 1, 00:33:24.084 "max_cntlid": 65519, 00:33:24.084 "namespaces": [ 00:33:24.084 { 00:33:24.084 "nsid": 1, 00:33:24.084 "bdev_name": "Nvme0n1", 00:33:24.084 "name": "Nvme0n1", 00:33:24.084 "nguid": "A9CD33FB628C488280769B9A50C630E9", 00:33:24.084 "uuid": "a9cd33fb-628c-4882-8076-9b9a50c630e9" 00:33:24.084 } 00:33:24.084 ] 00:33:24.084 } 00:33:24.084 ] 00:33:24.084 12:45:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.084 12:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:24.084 12:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:24.084 12:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:24.084 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9512038S2P0BGN 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:24.084 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ9512038S2P0BGN '!=' PHLJ9512038S2P0BGN ']' 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:24.084 12:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.084 rmmod nvme_tcp 00:33:24.084 rmmod nvme_fabrics 00:33:24.084 rmmod nvme_keyring 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:33:24.084 12:45:57 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 630794 ']' 00:33:24.084 12:45:57 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 630794 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 630794 ']' 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 630794 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 630794 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 630794' 00:33:24.084 killing process with pid 630794 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 630794 00:33:24.084 12:45:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 630794 00:33:26.626 12:45:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:26.626 12:45:59 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:26.627 12:45:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:26.627 12:45:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.627 12:45:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:26.627 12:45:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.627 12:45:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:26.627 12:45:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.530 12:46:01 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:28.530 00:33:28.530 real 0m27.859s 00:33:28.530 user 0m36.458s 00:33:28.530 sys 0m7.217s 00:33:28.530 12:46:01 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:28.530 12:46:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:28.530 ************************************ 00:33:28.530 END TEST nvmf_identify_passthru 00:33:28.530 ************************************ 00:33:28.530 12:46:01 -- common/autotest_common.sh@1142 -- # return 0 00:33:28.530 12:46:01 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:28.530 12:46:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:28.530 12:46:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.530 12:46:01 -- common/autotest_common.sh@10 -- # set +x 00:33:28.530 ************************************ 00:33:28.530 START TEST nvmf_dif 00:33:28.530 ************************************ 00:33:28.530 12:46:01 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:28.789 * Looking for test storage... 
00:33:28.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:28.790 12:46:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.790 12:46:01 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.790 12:46:02 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.790 12:46:02 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.790 12:46:02 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.790 12:46:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.790 12:46:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.790 12:46:02 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.790 12:46:02 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:33:28.790 12:46:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:28.790 12:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:28.790 12:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:28.790 12:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:28.790 12:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:28.790 12:46:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.790 12:46:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:28.790 12:46:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:28.790 12:46:02 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:33:28.790 12:46:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:36.934 12:46:10 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:36.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:36.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:36.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:36.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:36.935 12:46:10 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.246 12:46:10 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.246 12:46:10 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.246 12:46:10 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:37.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:37.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:33:37.246 00:33:37.246 --- 10.0.0.2 ping statistics --- 00:33:37.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.246 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:33:37.246 12:46:10 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:37.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:33:37.246 00:33:37.246 --- 10.0.0.1 ping statistics --- 00:33:37.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.246 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:33:37.246 12:46:10 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.246 12:46:10 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:33:37.246 12:46:10 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:37.246 12:46:10 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:41.455 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:41.455 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:41.455 12:46:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:41.455 12:46:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=637783 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 637783 00:33:41.455 12:46:14 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 637783 ']' 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:41.455 12:46:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:41.455 [2024-07-25 12:46:14.471490] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:33:41.455 [2024-07-25 12:46:14.471559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.455 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.455 [2024-07-25 12:46:14.563594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.455 [2024-07-25 12:46:14.655242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.455 [2024-07-25 12:46:14.655298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.455 [2024-07-25 12:46:14.655307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.455 [2024-07-25 12:46:14.655313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.455 [2024-07-25 12:46:14.655319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
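The target above is launched inside the network namespace that nvmf_tcp_init prepared. A minimal sketch consolidating the commands traced above (the cvl_0_0/cvl_0_1 interface names, 10.0.0.x addresses and namespace name are specific to this run; $SPDK_DIR stands in for the workspace's SPDK build tree):

    # Move one port of the NIC pair into a private namespace and address both ends,
    # exactly as traced by nvmf/common.sh above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions, then start the target in the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF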
00:33:41.455 [2024-07-25 12:46:14.655352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.027 12:46:15 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:42.027 12:46:15 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:33:42.027 12:46:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:42.027 12:46:15 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:42.027 12:46:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:42.027 12:46:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.027 12:46:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:42.027 12:46:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:42.027 12:46:15 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.027 12:46:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:42.027 [2024-07-25 12:46:15.401527] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.028 12:46:15 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.028 12:46:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:42.028 12:46:15 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:42.028 12:46:15 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:42.028 12:46:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:42.289 ************************************ 00:33:42.289 START TEST fio_dif_1_default 00:33:42.289 ************************************ 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:42.289 bdev_null0 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:42.289 [2024-07-25 12:46:15.493952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:42.289 { 00:33:42.289 "params": { 00:33:42.289 "name": "Nvme$subsystem", 00:33:42.289 "trtype": "$TEST_TRANSPORT", 00:33:42.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.289 "adrfam": "ipv4", 00:33:42.289 "trsvcid": "$NVMF_PORT", 00:33:42.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.289 "hdgst": ${hdgst:-false}, 00:33:42.289 "ddgst": ${ddgst:-false} 00:33:42.289 }, 00:33:42.289 "method": "bdev_nvme_attach_controller" 00:33:42.289 } 00:33:42.289 EOF 00:33:42.289 )") 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:42.289 "params": { 00:33:42.289 "name": "Nvme0", 00:33:42.289 "trtype": "tcp", 00:33:42.289 "traddr": "10.0.0.2", 00:33:42.289 "adrfam": "ipv4", 00:33:42.289 "trsvcid": "4420", 00:33:42.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:42.289 "hdgst": false, 00:33:42.289 "ddgst": false 00:33:42.289 }, 00:33:42.289 "method": "bdev_nvme_attach_controller" 00:33:42.289 }' 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:42.289 12:46:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.550 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:42.550 fio-3.35 00:33:42.550 Starting 1 thread 00:33:42.550 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.780 00:33:54.780 filename0: (groupid=0, jobs=1): err= 0: pid=638264: Thu Jul 25 12:46:26 2024 00:33:54.780 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10025msec) 00:33:54.780 slat (nsec): min=5530, max=88813, avg=6653.11, stdev=3094.55 00:33:54.780 clat (usec): min=875, max=43852, avg=40899.61, stdev=2582.63 00:33:54.780 lat (usec): min=882, max=43895, avg=40906.26, stdev=2582.70 00:33:54.780 clat percentiles (usec): 00:33:54.780 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:54.780 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:54.780 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:54.780 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:33:54.780 | 99.99th=[43779] 00:33:54.780 bw ( KiB/s): min= 384, max= 416, per=99.74%, avg=390.40, stdev=13.13, samples=20 00:33:54.780 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:33:54.780 lat 
(usec) : 1000=0.41% 00:33:54.780 lat (msec) : 50=99.59% 00:33:54.780 cpu : usr=95.52%, sys=4.23%, ctx=19, majf=0, minf=281 00:33:54.780 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.780 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.780 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:54.780 00:33:54.780 Run status group 0 (all jobs): 00:33:54.780 READ: bw=391KiB/s (400kB/s), 391KiB/s-391KiB/s (400kB/s-400kB/s), io=3920KiB (4014kB), run=10025-10025msec 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 00:33:54.780 real 0m11.110s 00:33:54.780 user 0m16.829s 00:33:54.780 sys 0m0.785s 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 ************************************ 00:33:54.780 END TEST fio_dif_1_default 00:33:54.780 ************************************ 00:33:54.780 12:46:26 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:33:54.780 12:46:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:54.780 12:46:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:54.780 12:46:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 ************************************ 00:33:54.780 START TEST fio_dif_1_multi_subsystems 00:33:54.780 ************************************ 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.780 12:46:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 bdev_null0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 [2024-07-25 12:46:26.681120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 bdev_null1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:54.780 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:54.780 { 00:33:54.780 "params": { 00:33:54.780 "name": "Nvme$subsystem", 00:33:54.780 "trtype": "$TEST_TRANSPORT", 00:33:54.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.780 "adrfam": "ipv4", 00:33:54.780 "trsvcid": "$NVMF_PORT", 00:33:54.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.780 "hdgst": ${hdgst:-false}, 00:33:54.780 "ddgst": ${ddgst:-false} 00:33:54.780 }, 00:33:54.780 "method": "bdev_nvme_attach_controller" 00:33:54.780 } 00:33:54.780 EOF 00:33:54.780 )") 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.781 
12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:54.781 { 00:33:54.781 "params": { 00:33:54.781 "name": "Nvme$subsystem", 00:33:54.781 "trtype": "$TEST_TRANSPORT", 00:33:54.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.781 "adrfam": "ipv4", 00:33:54.781 "trsvcid": "$NVMF_PORT", 00:33:54.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.781 "hdgst": ${hdgst:-false}, 00:33:54.781 "ddgst": ${ddgst:-false} 00:33:54.781 }, 00:33:54.781 "method": "bdev_nvme_attach_controller" 00:33:54.781 } 00:33:54.781 EOF 00:33:54.781 )") 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
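The subsystem plumbing traced above for this test reduces to four RPCs per subsystem. A sketch of the same sequence issued directly through scripts/rpc.py, which the rpc_cmd wrapper drives against the /var/tmp/spdk.sock socket noted earlier in the trace (values verbatim from the trace, shown for subsystem 0):

    # Null backing bdev with 16-byte metadata and DIF type 1, then an NVMe-oF
    # subsystem, its namespace, and a TCP listener on the namespaced target address.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420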
00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:54.781 "params": { 00:33:54.781 "name": "Nvme0", 00:33:54.781 "trtype": "tcp", 00:33:54.781 "traddr": "10.0.0.2", 00:33:54.781 "adrfam": "ipv4", 00:33:54.781 "trsvcid": "4420", 00:33:54.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.781 "hdgst": false, 00:33:54.781 "ddgst": false 00:33:54.781 }, 00:33:54.781 "method": "bdev_nvme_attach_controller" 00:33:54.781 },{ 00:33:54.781 "params": { 00:33:54.781 "name": "Nvme1", 00:33:54.781 "trtype": "tcp", 00:33:54.781 "traddr": "10.0.0.2", 00:33:54.781 "adrfam": "ipv4", 00:33:54.781 "trsvcid": "4420", 00:33:54.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.781 "hdgst": false, 00:33:54.781 "ddgst": false 00:33:54.781 }, 00:33:54.781 "method": "bdev_nvme_attach_controller" 00:33:54.781 }' 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:54.781 12:46:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.781 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:54.781 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:54.781 fio-3.35 00:33:54.781 Starting 2 threads 00:33:54.781 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.778 00:34:04.778 filename0: (groupid=0, jobs=1): err= 0: pid=640257: Thu Jul 25 12:46:37 2024 00:34:04.778 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10034msec) 00:34:04.778 slat (nsec): min=7234, max=37774, avg=7780.30, stdev=1834.66 00:34:04.778 clat (usec): min=40776, max=43005, avg=41097.72, stdev=384.11 00:34:04.778 lat (usec): min=40783, max=43013, avg=41105.50, stdev=384.36 00:34:04.778 clat percentiles (usec): 00:34:04.778 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:04.778 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:04.778 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:34:04.778 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:34:04.778 | 99.99th=[43254] 
00:34:04.778 bw ( KiB/s): min= 384, max= 416, per=33.84%, avg=388.80, stdev=11.72, samples=20 00:34:04.778 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:04.778 lat (msec) : 50=100.00% 00:34:04.778 cpu : usr=96.64%, sys=3.11%, ctx=14, majf=0, minf=133 00:34:04.778 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.778 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.778 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:04.778 filename1: (groupid=0, jobs=1): err= 0: pid=640258: Thu Jul 25 12:46:37 2024 00:34:04.778 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10004msec) 00:34:04.778 slat (nsec): min=7223, max=65421, avg=7626.44, stdev=1835.19 00:34:04.778 clat (usec): min=536, max=42179, avg=21038.23, stdev=20136.08 00:34:04.778 lat (usec): min=543, max=42217, avg=21045.85, stdev=20135.93 00:34:04.778 clat percentiles (usec): 00:34:04.778 | 1.00th=[ 603], 5.00th=[ 816], 10.00th=[ 848], 20.00th=[ 873], 00:34:04.778 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[40633], 60.00th=[41157], 00:34:04.778 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:04.778 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:04.778 | 99.99th=[42206] 00:34:04.778 bw ( KiB/s): min= 704, max= 768, per=66.38%, avg=761.26, stdev=20.18, samples=19 00:34:04.778 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:34:04.778 lat (usec) : 750=3.37%, 1000=46.05% 00:34:04.778 lat (msec) : 2=0.47%, 50=50.11% 00:34:04.778 cpu : usr=97.11%, sys=2.64%, ctx=13, majf=0, minf=165 00:34:04.778 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.778 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.778 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:04.778 00:34:04.778 Run status group 0 (all jobs): 00:34:04.778 READ: bw=1147KiB/s (1174kB/s), 389KiB/s-760KiB/s (398kB/s-778kB/s), io=11.2MiB (11.8MB), run=10004-10034msec 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.778 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.779 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.779 00:34:04.779 real 0m11.434s 00:34:04.779 user 0m30.610s 00:34:04.779 sys 0m0.893s 00:34:04.779 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:04.779 12:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.779 ************************************ 00:34:04.779 END TEST fio_dif_1_multi_subsystems 00:34:04.779 ************************************ 00:34:04.779 12:46:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:04.779 12:46:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:04.779 12:46:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:04.779 12:46:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.779 12:46:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:04.779 ************************************ 00:34:04.779 START TEST fio_dif_rand_params 00:34:04.779 ************************************ 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.779 bdev_null0 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.779 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:04.779 [2024-07-25 12:46:38.194564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.039 { 00:34:05.039 "params": { 00:34:05.039 "name": "Nvme$subsystem", 00:34:05.039 "trtype": "$TEST_TRANSPORT", 00:34:05.039 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:05.039 "adrfam": "ipv4", 00:34:05.039 "trsvcid": "$NVMF_PORT", 00:34:05.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.039 "hdgst": ${hdgst:-false}, 00:34:05.039 "ddgst": ${ddgst:-false} 00:34:05.039 }, 00:34:05.039 "method": "bdev_nvme_attach_controller" 00:34:05.039 } 00:34:05.039 EOF 00:34:05.039 )") 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:05.039 "params": { 00:34:05.039 "name": "Nvme0", 00:34:05.039 "trtype": "tcp", 00:34:05.039 "traddr": "10.0.0.2", 00:34:05.039 "adrfam": "ipv4", 00:34:05.039 "trsvcid": "4420", 00:34:05.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:05.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:05.039 "hdgst": false, 00:34:05.039 "ddgst": false 00:34:05.039 }, 00:34:05.039 "method": "bdev_nvme_attach_controller" 00:34:05.039 }' 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:05.039 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:05.040 12:46:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:05.299 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:05.299 ... 
00:34:05.299 fio-3.35 00:34:05.299 Starting 3 threads 00:34:05.299 EAL: No free 2048 kB hugepages reported on node 1 00:34:11.883 00:34:11.883 filename0: (groupid=0, jobs=1): err= 0: pid=642260: Thu Jul 25 12:46:44 2024 00:34:11.883 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(159MiB/5047msec) 00:34:11.883 slat (nsec): min=7309, max=43928, avg=8458.90, stdev=2316.63 00:34:11.883 clat (usec): min=5277, max=53740, avg=11829.68, stdev=6788.90 00:34:11.883 lat (usec): min=5288, max=53747, avg=11838.13, stdev=6788.88 00:34:11.883 clat percentiles (usec): 00:34:11.883 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7570], 20.00th=[ 8455], 00:34:11.883 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10814], 60.00th=[11731], 00:34:11.883 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14091], 95.00th=[15270], 00:34:11.883 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53216], 99.95th=[53740], 00:34:11.883 | 99.99th=[53740] 00:34:11.883 bw ( KiB/s): min=27648, max=37888, per=33.31%, avg=32588.80, stdev=2975.80, samples=10 00:34:11.883 iops : min= 216, max= 296, avg=254.60, stdev=23.25, samples=10 00:34:11.883 lat (msec) : 10=39.84%, 20=57.41%, 50=1.65%, 100=1.10% 00:34:11.883 cpu : usr=90.65%, sys=6.34%, ctx=405, majf=0, minf=99 00:34:11.883 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.883 issued rwts: total=1275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.883 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.883 filename0: (groupid=0, jobs=1): err= 0: pid=642261: Thu Jul 25 12:46:44 2024 00:34:11.883 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(159MiB/5044msec) 00:34:11.883 slat (nsec): min=7254, max=31880, avg=8024.90, stdev=1293.09 00:34:11.883 clat (usec): min=5511, max=92090, avg=11899.86, stdev=11334.74 00:34:11.883 lat (usec): min=5518, max=92098, avg=11907.89, stdev=11334.76 00:34:11.883 clat percentiles (usec): 00:34:11.883 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 8029], 00:34:11.883 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:34:11.883 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[49546], 00:34:11.883 | 99.00th=[51643], 99.50th=[52167], 99.90th=[90702], 99.95th=[91751], 00:34:11.883 | 99.99th=[91751] 00:34:11.883 bw ( KiB/s): min=18176, max=42752, per=33.16%, avg=32441.60, stdev=8450.72, samples=10 00:34:11.883 iops : min= 142, max= 334, avg=253.40, stdev=66.02, samples=10 00:34:11.883 lat (msec) : 10=81.10%, 20=11.57%, 50=3.07%, 100=4.25% 00:34:11.883 cpu : usr=96.99%, sys=2.78%, ctx=9, majf=0, minf=97 00:34:11.883 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.883 issued rwts: total=1270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.883 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.883 filename0: (groupid=0, jobs=1): err= 0: pid=642262: Thu Jul 25 12:46:44 2024 00:34:11.883 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(164MiB/5044msec) 00:34:11.883 slat (nsec): min=7242, max=45682, avg=8025.03, stdev=1339.55 00:34:11.883 clat (usec): min=5746, max=52076, avg=11483.46, stdev=4273.04 00:34:11.883 lat (usec): min=5753, max=52084, avg=11491.49, stdev=4273.31 00:34:11.883 clat percentiles (usec): 00:34:11.883 
| 1.00th=[ 6521], 5.00th=[ 7111], 10.00th=[ 7767], 20.00th=[ 8979], 00:34:11.883 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11338], 60.00th=[12256], 00:34:11.883 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14091], 95.00th=[14877], 00:34:11.883 | 99.00th=[16057], 99.50th=[50070], 99.90th=[51643], 99.95th=[52167], 00:34:11.883 | 99.99th=[52167] 00:34:11.883 bw ( KiB/s): min=27904, max=39680, per=34.29%, avg=33555.40, stdev=3711.60, samples=10 00:34:11.883 iops : min= 218, max= 310, avg=262.10, stdev=29.04, samples=10 00:34:11.883 lat (msec) : 10=35.42%, 20=63.75%, 50=0.23%, 100=0.61% 00:34:11.883 cpu : usr=96.03%, sys=3.75%, ctx=13, majf=0, minf=127 00:34:11.883 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.883 issued rwts: total=1313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.883 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.883 00:34:11.883 Run status group 0 (all jobs): 00:34:11.883 READ: bw=95.6MiB/s (100MB/s), 31.5MiB/s-32.5MiB/s (33.0MB/s-34.1MB/s), io=482MiB (506MB), run=5044-5047msec 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.883 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:11.884 12:46:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 bdev_null0 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 [2024-07-25 12:46:44.454762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 bdev_null1 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
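This phase repeats the plumbing with three null bdevs carrying DIF type 2 metadata. A hedged annotation of the bdev_null_create arguments used here; the size/block-size reading follows the RPC's usual semantics rather than anything stated in this trace:

    # bdev_null_create <name> <total_size> <block_size> [--md-size N] [--dif-type T]
    # i.e. a 64 MiB null bdev with 512-byte data blocks plus 16 bytes of per-block
    # metadata used to carry DIF type 2 protection information.
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
    rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2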
00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 bdev_null2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:11.884 { 00:34:11.884 "params": { 00:34:11.884 "name": "Nvme$subsystem", 00:34:11.884 "trtype": "$TEST_TRANSPORT", 00:34:11.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.884 "adrfam": "ipv4", 00:34:11.884 "trsvcid": "$NVMF_PORT", 00:34:11.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.884 "hdgst": ${hdgst:-false}, 00:34:11.884 "ddgst": ${ddgst:-false} 00:34:11.884 }, 00:34:11.884 "method": "bdev_nvme_attach_controller" 00:34:11.884 } 00:34:11.884 EOF 00:34:11.884 )") 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:11.884 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:11.885 { 00:34:11.885 "params": { 00:34:11.885 "name": "Nvme$subsystem", 00:34:11.885 "trtype": "$TEST_TRANSPORT", 00:34:11.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.885 "adrfam": "ipv4", 00:34:11.885 "trsvcid": "$NVMF_PORT", 00:34:11.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.885 "hdgst": ${hdgst:-false}, 00:34:11.885 "ddgst": ${ddgst:-false} 00:34:11.885 }, 00:34:11.885 "method": "bdev_nvme_attach_controller" 00:34:11.885 } 00:34:11.885 EOF 00:34:11.885 )") 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:11.885 { 00:34:11.885 "params": { 00:34:11.885 "name": "Nvme$subsystem", 00:34:11.885 "trtype": "$TEST_TRANSPORT", 00:34:11.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.885 "adrfam": "ipv4", 00:34:11.885 "trsvcid": "$NVMF_PORT", 00:34:11.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.885 "hdgst": ${hdgst:-false}, 00:34:11.885 "ddgst": ${ddgst:-false} 00:34:11.885 }, 00:34:11.885 "method": "bdev_nvme_attach_controller" 00:34:11.885 } 00:34:11.885 EOF 00:34:11.885 )") 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:11.885 "params": { 00:34:11.885 "name": "Nvme0", 00:34:11.885 "trtype": "tcp", 00:34:11.885 "traddr": "10.0.0.2", 00:34:11.885 "adrfam": "ipv4", 00:34:11.885 "trsvcid": "4420", 00:34:11.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.885 "hdgst": false, 00:34:11.885 "ddgst": false 00:34:11.885 }, 00:34:11.885 "method": "bdev_nvme_attach_controller" 00:34:11.885 },{ 00:34:11.885 "params": { 00:34:11.885 "name": "Nvme1", 00:34:11.885 "trtype": "tcp", 00:34:11.885 "traddr": "10.0.0.2", 00:34:11.885 "adrfam": "ipv4", 00:34:11.885 "trsvcid": "4420", 00:34:11.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:11.885 "hdgst": false, 00:34:11.885 "ddgst": false 00:34:11.885 }, 00:34:11.885 "method": "bdev_nvme_attach_controller" 00:34:11.885 },{ 00:34:11.885 "params": { 00:34:11.885 "name": "Nvme2", 00:34:11.885 "trtype": "tcp", 00:34:11.885 "traddr": "10.0.0.2", 00:34:11.885 "adrfam": "ipv4", 00:34:11.885 "trsvcid": "4420", 00:34:11.885 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:11.885 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:11.885 "hdgst": false, 00:34:11.885 "ddgst": false 00:34:11.885 }, 00:34:11.885 "method": "bdev_nvme_attach_controller" 00:34:11.885 }' 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:11.885 12:46:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.885 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:11.885 ... 00:34:11.885 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:11.885 ... 00:34:11.885 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:11.885 ... 00:34:11.885 fio-3.35 00:34:11.885 Starting 24 threads 00:34:11.885 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.116 00:34:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=643613: Thu Jul 25 12:46:55 2024 00:34:24.116 read: IOPS=544, BW=2178KiB/s (2230kB/s)(21.3MiB/10022msec) 00:34:24.116 slat (nsec): min=7267, max=93763, avg=9167.64, stdev=4504.56 00:34:24.116 clat (usec): min=3186, max=30503, avg=29308.11, stdev=2418.17 00:34:24.116 lat (usec): min=3194, max=30512, avg=29317.28, stdev=2416.99 00:34:24.116 clat percentiles (usec): 00:34:24.116 | 1.00th=[17171], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:24.116 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:24.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.116 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:34:24.116 | 99.99th=[30540] 00:34:24.116 bw ( KiB/s): min= 2048, max= 2432, per=4.21%, avg=2176.00, stdev=83.06, samples=20 00:34:24.116 iops : min= 512, max= 608, avg=544.00, stdev=20.76, samples=20 00:34:24.116 lat (msec) : 4=0.16%, 10=0.46%, 20=0.97%, 50=98.41% 00:34:24.116 cpu : usr=98.93%, sys=0.70%, ctx=70, majf=0, minf=70 00:34:24.116 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 issued rwts: total=5456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=643614: Thu Jul 25 12:46:55 2024 00:34:24.116 read: IOPS=538, BW=2154KiB/s (2206kB/s)(21.1MiB/10012msec) 00:34:24.116 slat (nsec): min=5769, max=75729, avg=27325.96, stdev=13331.63 00:34:24.116 clat (usec): min=20060, max=30854, avg=29466.59, stdev=661.63 00:34:24.116 lat (usec): min=20073, max=30884, avg=29493.92, stdev=661.52 00:34:24.116 clat percentiles (usec): 00:34:24.116 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:34:24.116 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.116 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.116 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:34:24.116 | 99.99th=[30802] 00:34:24.116 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2149.05, stdev=53.61, samples=19 00:34:24.116 iops : min= 512, max= 544, avg=537.26, stdev=13.40, samples=19 00:34:24.116 lat (msec) : 50=100.00% 
00:34:24.116 cpu : usr=98.57%, sys=0.94%, ctx=65, majf=0, minf=48 00:34:24.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=643615: Thu Jul 25 12:46:55 2024 00:34:24.116 read: IOPS=537, BW=2149KiB/s (2201kB/s)(21.0MiB/10005msec) 00:34:24.116 slat (nsec): min=5930, max=65873, avg=9737.59, stdev=5594.09 00:34:24.116 clat (usec): min=15226, max=57015, avg=29697.84, stdev=2118.06 00:34:24.116 lat (usec): min=15234, max=57030, avg=29707.58, stdev=2118.01 00:34:24.116 clat percentiles (usec): 00:34:24.116 | 1.00th=[21890], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:24.116 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:24.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.116 | 99.00th=[37487], 99.50th=[38536], 99.90th=[56886], 99.95th=[56886], 00:34:24.116 | 99.99th=[56886] 00:34:24.116 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2142.32, stdev=70.53, samples=19 00:34:24.116 iops : min= 480, max= 544, avg=535.58, stdev=17.63, samples=19 00:34:24.116 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:34:24.116 cpu : usr=98.60%, sys=0.88%, ctx=83, majf=0, minf=58 00:34:24.116 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:34:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=643616: Thu Jul 25 12:46:55 2024 00:34:24.116 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10022msec) 00:34:24.116 slat (nsec): min=7256, max=74521, avg=15449.58, stdev=10947.55 00:34:24.116 clat (usec): min=18105, max=30782, avg=29529.33, stdev=1000.99 00:34:24.116 lat (usec): min=18124, max=30793, avg=29544.78, stdev=999.87 00:34:24.116 clat percentiles (usec): 00:34:24.116 | 1.00th=[22152], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:24.116 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:34:24.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.116 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:34:24.116 | 99.99th=[30802] 00:34:24.116 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2156.80, stdev=46.89, samples=20 00:34:24.116 iops : min= 512, max= 544, avg=539.20, stdev=11.72, samples=20 00:34:24.116 lat (msec) : 20=0.30%, 50=99.70% 00:34:24.116 cpu : usr=98.92%, sys=0.73%, ctx=91, majf=0, minf=69 00:34:24.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=643617: Thu Jul 25 12:46:55 2024 00:34:24.116 read: IOPS=538, 
BW=2156KiB/s (2208kB/s)(21.1MiB/10004msec) 00:34:24.116 slat (nsec): min=6226, max=69366, avg=22298.22, stdev=13172.30 00:34:24.116 clat (usec): min=4245, max=57899, avg=29512.40, stdev=2149.12 00:34:24.116 lat (usec): min=4256, max=57916, avg=29534.70, stdev=2148.94 00:34:24.116 clat percentiles (usec): 00:34:24.116 | 1.00th=[25822], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.116 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.116 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.116 | 99.00th=[30540], 99.50th=[38011], 99.90th=[52167], 99.95th=[52167], 00:34:24.116 | 99.99th=[57934] 00:34:24.116 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2142.47, stdev=71.42, samples=19 00:34:24.116 iops : min= 480, max= 544, avg=535.58, stdev=17.98, samples=19 00:34:24.116 lat (msec) : 10=0.30%, 20=0.30%, 50=99.11%, 100=0.30% 00:34:24.116 cpu : usr=98.48%, sys=0.91%, ctx=90, majf=0, minf=49 00:34:24.116 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.116 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=643618: Thu Jul 25 12:46:55 2024 00:34:24.116 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10001msec) 00:34:24.116 slat (nsec): min=5870, max=82653, avg=20466.15, stdev=15113.68 00:34:24.117 clat (usec): min=18950, max=44533, avg=29603.51, stdev=934.66 00:34:24.117 lat (usec): min=18958, max=44552, avg=29623.98, stdev=934.16 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.117 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:34:24.117 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.117 | 99.00th=[30540], 99.50th=[32637], 99.90th=[40109], 99.95th=[44303], 00:34:24.117 | 99.99th=[44303] 00:34:24.117 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2149.05, stdev=51.72, samples=19 00:34:24.117 iops : min= 512, max= 544, avg=537.26, stdev=12.93, samples=19 00:34:24.117 lat (msec) : 20=0.32%, 50=99.68% 00:34:24.117 cpu : usr=99.14%, sys=0.57%, ctx=16, majf=0, minf=33 00:34:24.117 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:24.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.117 filename0: (groupid=0, jobs=1): err= 0: pid=643619: Thu Jul 25 12:46:55 2024 00:34:24.117 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10001msec) 00:34:24.117 slat (nsec): min=6111, max=62594, avg=22288.51, stdev=10500.75 00:34:24.117 clat (usec): min=15058, max=52808, avg=29565.28, stdev=1560.80 00:34:24.117 lat (usec): min=15080, max=52825, avg=29587.57, stdev=1560.17 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.117 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.117 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.117 | 
99.00th=[30278], 99.50th=[31065], 99.90th=[52691], 99.95th=[52691], 00:34:24.117 | 99.99th=[52691] 00:34:24.117 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2142.32, stdev=71.93, samples=19 00:34:24.117 iops : min= 480, max= 544, avg=535.58, stdev=17.98, samples=19 00:34:24.117 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:34:24.117 cpu : usr=98.96%, sys=0.71%, ctx=88, majf=0, minf=51 00:34:24.117 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:24.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.117 filename0: (groupid=0, jobs=1): err= 0: pid=643620: Thu Jul 25 12:46:55 2024 00:34:24.117 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10023msec) 00:34:24.117 slat (nsec): min=6286, max=76415, avg=19053.36, stdev=12886.24 00:34:24.117 clat (usec): min=12040, max=42609, avg=29346.76, stdev=2039.78 00:34:24.117 lat (usec): min=12048, max=42620, avg=29365.82, stdev=2041.12 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[18220], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.117 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.117 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.117 | 99.00th=[30802], 99.50th=[40109], 99.90th=[41157], 99.95th=[41681], 00:34:24.117 | 99.99th=[42730] 00:34:24.117 bw ( KiB/s): min= 2048, max= 2412, per=4.20%, avg=2168.21, stdev=76.09, samples=19 00:34:24.117 iops : min= 512, max= 603, avg=542.05, stdev=19.02, samples=19 00:34:24.117 lat (msec) : 20=2.15%, 50=97.85% 00:34:24.117 cpu : usr=98.58%, sys=0.84%, ctx=105, majf=0, minf=59 00:34:24.117 IO depths : 1=4.9%, 2=11.0%, 4=24.4%, 8=52.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:24.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 issued rwts: total=5435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.117 filename1: (groupid=0, jobs=1): err= 0: pid=643621: Thu Jul 25 12:46:55 2024 00:34:24.117 read: IOPS=541, BW=2166KiB/s (2218kB/s)(21.2MiB/10015msec) 00:34:24.117 slat (nsec): min=7444, max=75906, avg=23739.92, stdev=10924.54 00:34:24.117 clat (usec): min=7278, max=30846, avg=29336.68, stdev=1816.78 00:34:24.117 lat (usec): min=7290, max=30855, avg=29360.42, stdev=1817.52 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[19006], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.117 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.117 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.117 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30802], 99.95th=[30802], 00:34:24.117 | 99.99th=[30802] 00:34:24.117 bw ( KiB/s): min= 2048, max= 2432, per=4.19%, avg=2163.20, stdev=82.01, samples=20 00:34:24.117 iops : min= 512, max= 608, avg=540.80, stdev=20.50, samples=20 00:34:24.117 lat (msec) : 10=0.29%, 20=0.88%, 50=98.82% 00:34:24.117 cpu : usr=98.84%, sys=0.85%, ctx=46, majf=0, minf=52 00:34:24.117 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:24.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 complete : 
0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.117 filename1: (groupid=0, jobs=1): err= 0: pid=643622: Thu Jul 25 12:46:55 2024 00:34:24.117 read: IOPS=542, BW=2170KiB/s (2222kB/s)(21.2MiB/10028msec) 00:34:24.117 slat (nsec): min=5648, max=82574, avg=32056.15, stdev=16306.58 00:34:24.117 clat (usec): min=6693, max=30786, avg=29186.00, stdev=2025.98 00:34:24.117 lat (usec): min=6704, max=30815, avg=29218.06, stdev=2028.38 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[17433], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:34:24.117 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:34:24.117 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.117 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30540], 99.95th=[30802], 00:34:24.117 | 99.99th=[30802] 00:34:24.117 bw ( KiB/s): min= 2048, max= 2432, per=4.20%, avg=2169.60, stdev=77.42, samples=20 00:34:24.117 iops : min= 512, max= 608, avg=542.40, stdev=19.35, samples=20 00:34:24.117 lat (msec) : 10=0.29%, 20=1.36%, 50=98.35% 00:34:24.117 cpu : usr=98.95%, sys=0.71%, ctx=48, majf=0, minf=60 00:34:24.117 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:24.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.117 filename1: (groupid=0, jobs=1): err= 0: pid=643623: Thu Jul 25 12:46:55 2024 00:34:24.117 read: IOPS=538, BW=2156KiB/s (2208kB/s)(21.1MiB/10004msec) 00:34:24.117 slat (nsec): min=12076, max=85614, avg=32568.58, stdev=12029.78 00:34:24.117 clat (usec): min=3463, max=51522, avg=29363.90, stdev=2185.84 00:34:24.117 lat (usec): min=3487, max=51556, avg=29396.46, stdev=2185.89 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[22152], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:34:24.117 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:34:24.117 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:34:24.117 | 99.00th=[30802], 99.50th=[36963], 99.90th=[51643], 99.95th=[51643], 00:34:24.117 | 99.99th=[51643] 00:34:24.117 bw ( KiB/s): min= 1923, max= 2192, per=4.14%, avg=2142.47, stdev=70.21, samples=19 00:34:24.117 iops : min= 480, max= 548, avg=535.58, stdev=17.68, samples=19 00:34:24.117 lat (msec) : 4=0.24%, 10=0.06%, 20=0.33%, 50=99.07%, 100=0.30% 00:34:24.117 cpu : usr=98.01%, sys=1.33%, ctx=40, majf=0, minf=34 00:34:24.117 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:34:24.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.117 filename1: (groupid=0, jobs=1): err= 0: pid=643624: Thu Jul 25 12:46:55 2024 00:34:24.117 read: IOPS=538, BW=2152KiB/s (2204kB/s)(21.1MiB/10021msec) 00:34:24.117 slat (usec): min=7, max=103, avg=30.09, stdev=18.95 00:34:24.117 clat (usec): min=22054, max=31038, avg=29453.99, stdev=473.03 00:34:24.117 lat (usec): min=22063, max=31088, 
avg=29484.08, stdev=473.49 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[28705], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:34:24.117 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.117 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.117 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30802], 99.95th=[31065], 00:34:24.117 | 99.99th=[31065] 00:34:24.117 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2149.26, stdev=53.20, samples=19 00:34:24.117 iops : min= 512, max= 544, avg=537.32, stdev=13.30, samples=19 00:34:24.117 lat (msec) : 50=100.00% 00:34:24.117 cpu : usr=98.94%, sys=0.66%, ctx=14, majf=0, minf=55 00:34:24.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.117 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.117 filename1: (groupid=0, jobs=1): err= 0: pid=643625: Thu Jul 25 12:46:55 2024 00:34:24.117 read: IOPS=538, BW=2152KiB/s (2204kB/s)(21.1MiB/10021msec) 00:34:24.117 slat (usec): min=12, max=102, avg=34.57, stdev=15.76 00:34:24.117 clat (usec): min=21842, max=31091, avg=29396.15, stdev=481.84 00:34:24.117 lat (usec): min=21869, max=31118, avg=29430.73, stdev=483.37 00:34:24.117 clat percentiles (usec): 00:34:24.117 | 1.00th=[28705], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:34:24.117 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:34:24.117 | 70.00th=[29492], 80.00th=[29492], 90.00th=[29754], 95.00th=[29754], 00:34:24.118 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[31065], 00:34:24.118 | 99.99th=[31065] 00:34:24.118 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2149.26, stdev=53.20, samples=19 00:34:24.118 iops : min= 512, max= 544, avg=537.32, stdev=13.30, samples=19 00:34:24.118 lat (msec) : 50=100.00% 00:34:24.118 cpu : usr=98.13%, sys=1.17%, ctx=47, majf=0, minf=38 00:34:24.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.118 filename1: (groupid=0, jobs=1): err= 0: pid=643626: Thu Jul 25 12:46:55 2024 00:34:24.118 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10001msec) 00:34:24.118 slat (nsec): min=5376, max=86332, avg=23670.15, stdev=16002.68 00:34:24.118 clat (usec): min=29055, max=32775, avg=29539.14, stdev=278.44 00:34:24.118 lat (usec): min=29068, max=32791, avg=29562.81, stdev=277.26 00:34:24.118 clat percentiles (usec): 00:34:24.118 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:34:24.118 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.118 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.118 | 99.00th=[30278], 99.50th=[30802], 99.90th=[32637], 99.95th=[32637], 00:34:24.118 | 99.99th=[32900] 00:34:24.118 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2149.05, stdev=53.61, samples=19 00:34:24.118 iops : min= 512, max= 544, avg=537.26, stdev=13.40, samples=19 00:34:24.118 lat (msec) : 50=100.00% 
00:34:24.118 cpu : usr=98.57%, sys=0.87%, ctx=63, majf=0, minf=36 00:34:24.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.118 filename1: (groupid=0, jobs=1): err= 0: pid=643627: Thu Jul 25 12:46:55 2024 00:34:24.118 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10001msec) 00:34:24.118 slat (nsec): min=5817, max=70787, avg=24956.28, stdev=12243.20 00:34:24.118 clat (usec): min=15014, max=52619, avg=29546.28, stdev=1524.40 00:34:24.118 lat (usec): min=15031, max=52635, avg=29571.24, stdev=1523.54 00:34:24.118 clat percentiles (usec): 00:34:24.118 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.118 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.118 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.118 | 99.00th=[30278], 99.50th=[30802], 99.90th=[52691], 99.95th=[52691], 00:34:24.118 | 99.99th=[52691] 00:34:24.118 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2142.32, stdev=71.93, samples=19 00:34:24.118 iops : min= 480, max= 544, avg=535.58, stdev=17.98, samples=19 00:34:24.118 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:34:24.118 cpu : usr=98.86%, sys=0.64%, ctx=95, majf=0, minf=42 00:34:24.118 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.118 filename1: (groupid=0, jobs=1): err= 0: pid=643628: Thu Jul 25 12:46:55 2024 00:34:24.118 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10002msec) 00:34:24.118 slat (nsec): min=6231, max=65110, avg=24103.77, stdev=11395.26 00:34:24.118 clat (usec): min=14998, max=53840, avg=29550.28, stdev=1698.37 00:34:24.118 lat (usec): min=15015, max=53857, avg=29574.38, stdev=1697.54 00:34:24.118 clat percentiles (usec): 00:34:24.118 | 1.00th=[28181], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.118 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.118 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.118 | 99.00th=[30540], 99.50th=[38011], 99.90th=[53740], 99.95th=[53740], 00:34:24.118 | 99.99th=[53740] 00:34:24.118 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2142.47, stdev=70.01, samples=19 00:34:24.118 iops : min= 480, max= 544, avg=535.58, stdev=17.63, samples=19 00:34:24.118 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:34:24.118 cpu : usr=99.25%, sys=0.47%, ctx=9, majf=0, minf=43 00:34:24.118 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.118 filename2: (groupid=0, jobs=1): err= 0: pid=643629: Thu Jul 25 12:46:55 2024 00:34:24.118 read: 
IOPS=542, BW=2171KiB/s (2224kB/s)(21.2MiB/10021msec) 00:34:24.118 slat (nsec): min=7257, max=93735, avg=12954.24, stdev=9206.08 00:34:24.118 clat (usec): min=5976, max=30834, avg=29369.47, stdev=2259.29 00:34:24.118 lat (usec): min=5989, max=30844, avg=29382.43, stdev=2258.03 00:34:24.118 clat percentiles (usec): 00:34:24.118 | 1.00th=[17171], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:24.118 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:24.118 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.118 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:34:24.118 | 99.99th=[30802] 00:34:24.118 bw ( KiB/s): min= 2048, max= 2436, per=4.20%, avg=2169.80, stdev=78.13, samples=20 00:34:24.118 iops : min= 512, max= 609, avg=542.45, stdev=19.53, samples=20 00:34:24.118 lat (msec) : 10=0.59%, 20=0.88%, 50=98.53% 00:34:24.118 cpu : usr=98.96%, sys=0.66%, ctx=49, majf=0, minf=79 00:34:24.118 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.118 filename2: (groupid=0, jobs=1): err= 0: pid=643630: Thu Jul 25 12:46:55 2024 00:34:24.118 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10022msec) 00:34:24.118 slat (nsec): min=7340, max=78449, avg=23605.14, stdev=13671.33 00:34:24.118 clat (usec): min=18281, max=30914, avg=29466.06, stdev=989.58 00:34:24.118 lat (usec): min=18295, max=30929, avg=29489.67, stdev=988.85 00:34:24.118 clat percentiles (usec): 00:34:24.118 | 1.00th=[22414], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.118 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.118 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.118 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:34:24.118 | 99.99th=[30802] 00:34:24.118 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2156.80, stdev=46.89, samples=20 00:34:24.118 iops : min= 512, max= 544, avg=539.20, stdev=11.72, samples=20 00:34:24.118 lat (msec) : 20=0.30%, 50=99.70% 00:34:24.118 cpu : usr=99.04%, sys=0.58%, ctx=44, majf=0, minf=41 00:34:24.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.118 filename2: (groupid=0, jobs=1): err= 0: pid=643631: Thu Jul 25 12:46:55 2024 00:34:24.118 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10001msec) 00:34:24.118 slat (nsec): min=7270, max=74042, avg=26050.08, stdev=13692.80 00:34:24.118 clat (usec): min=22309, max=40311, avg=29506.99, stdev=726.70 00:34:24.118 lat (usec): min=22318, max=40334, avg=29533.04, stdev=727.21 00:34:24.118 clat percentiles (usec): 00:34:24.118 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:34:24.118 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.118 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.118 | 99.00th=[30278], 
99.50th=[30540], 99.90th=[40109], 99.95th=[40109], 00:34:24.118 | 99.99th=[40109] 00:34:24.118 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2149.05, stdev=53.61, samples=19 00:34:24.118 iops : min= 512, max= 544, avg=537.26, stdev=13.40, samples=19 00:34:24.118 lat (msec) : 50=100.00% 00:34:24.118 cpu : usr=98.96%, sys=0.66%, ctx=77, majf=0, minf=40 00:34:24.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.118 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.118 filename2: (groupid=0, jobs=1): err= 0: pid=643632: Thu Jul 25 12:46:55 2024 00:34:24.118 read: IOPS=538, BW=2154KiB/s (2206kB/s)(21.1MiB/10012msec) 00:34:24.118 slat (nsec): min=5579, max=73757, avg=23572.27, stdev=11180.78 00:34:24.118 clat (usec): min=20128, max=30826, avg=29486.96, stdev=658.84 00:34:24.118 lat (usec): min=20142, max=30858, avg=29510.53, stdev=659.25 00:34:24.118 clat percentiles (usec): 00:34:24.118 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.118 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.118 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.118 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:34:24.118 | 99.99th=[30802] 00:34:24.118 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2149.05, stdev=53.61, samples=19 00:34:24.119 iops : min= 512, max= 544, avg=537.26, stdev=13.40, samples=19 00:34:24.119 lat (msec) : 50=100.00% 00:34:24.119 cpu : usr=98.70%, sys=0.77%, ctx=196, majf=0, minf=34 00:34:24.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.119 filename2: (groupid=0, jobs=1): err= 0: pid=643633: Thu Jul 25 12:46:55 2024 00:34:24.119 read: IOPS=537, BW=2150KiB/s (2201kB/s)(21.0MiB/10003msec) 00:34:24.119 slat (nsec): min=5665, max=64209, avg=14530.54, stdev=9793.82 00:34:24.119 clat (usec): min=15358, max=54798, avg=29659.84, stdev=1808.18 00:34:24.119 lat (usec): min=15377, max=54814, avg=29674.37, stdev=1807.95 00:34:24.119 clat percentiles (usec): 00:34:24.119 | 1.00th=[25822], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:24.119 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:24.119 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.119 | 99.00th=[31065], 99.50th=[38011], 99.90th=[54789], 99.95th=[54789], 00:34:24.119 | 99.99th=[54789] 00:34:24.119 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2142.32, stdev=71.93, samples=19 00:34:24.119 iops : min= 480, max= 544, avg=535.58, stdev=17.98, samples=19 00:34:24.119 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:34:24.119 cpu : usr=98.44%, sys=1.03%, ctx=97, majf=0, minf=35 00:34:24.119 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:34:24.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:24.119 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.119 filename2: (groupid=0, jobs=1): err= 0: pid=643634: Thu Jul 25 12:46:55 2024 00:34:24.119 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10022msec) 00:34:24.119 slat (nsec): min=5580, max=73528, avg=23420.06, stdev=10593.36 00:34:24.119 clat (usec): min=18556, max=30813, avg=29439.41, stdev=993.24 00:34:24.119 lat (usec): min=18574, max=30845, avg=29462.83, stdev=993.75 00:34:24.119 clat percentiles (usec): 00:34:24.119 | 1.00th=[22414], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.119 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.119 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.119 | 99.00th=[30016], 99.50th=[30540], 99.90th=[30540], 99.95th=[30802], 00:34:24.119 | 99.99th=[30802] 00:34:24.119 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2157.00, stdev=46.99, samples=20 00:34:24.119 iops : min= 512, max= 545, avg=539.25, stdev=11.75, samples=20 00:34:24.119 lat (msec) : 20=0.30%, 50=99.70% 00:34:24.119 cpu : usr=97.64%, sys=1.38%, ctx=703, majf=0, minf=42 00:34:24.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:24.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.119 filename2: (groupid=0, jobs=1): err= 0: pid=643635: Thu Jul 25 12:46:55 2024 00:34:24.119 read: IOPS=538, BW=2155KiB/s (2207kB/s)(21.1MiB/10009msec) 00:34:24.119 slat (nsec): min=6038, max=84474, avg=18840.40, stdev=14687.07 00:34:24.119 clat (usec): min=18454, max=40327, avg=29554.28, stdev=1434.43 00:34:24.119 lat (usec): min=18479, max=40338, avg=29573.12, stdev=1434.11 00:34:24.119 clat percentiles (usec): 00:34:24.119 | 1.00th=[20579], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:24.119 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:34:24.119 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:34:24.119 | 99.00th=[30540], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:34:24.119 | 99.99th=[40109] 00:34:24.119 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2155.79, stdev=47.95, samples=19 00:34:24.119 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:34:24.119 lat (msec) : 20=0.72%, 50=99.28% 00:34:24.119 cpu : usr=99.12%, sys=0.58%, ctx=13, majf=0, minf=41 00:34:24.119 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:24.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.119 filename2: (groupid=0, jobs=1): err= 0: pid=643636: Thu Jul 25 12:46:55 2024 00:34:24.119 read: IOPS=538, BW=2156KiB/s (2208kB/s)(21.1MiB/10004msec) 00:34:24.119 slat (nsec): min=6158, max=68668, avg=25298.55, stdev=12349.21 00:34:24.119 clat (usec): min=4024, max=51946, avg=29449.15, stdev=2032.63 00:34:24.119 lat (usec): min=4034, max=51968, avg=29474.45, stdev=2032.71 00:34:24.119 clat percentiles (usec): 00:34:24.119 | 
1.00th=[28181], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:34:24.119 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:34:24.119 | 70.00th=[29492], 80.00th=[29754], 90.00th=[29754], 95.00th=[29754], 00:34:24.119 | 99.00th=[30278], 99.50th=[30802], 99.90th=[52167], 99.95th=[52167], 00:34:24.119 | 99.99th=[52167] 00:34:24.119 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2142.47, stdev=71.42, samples=19 00:34:24.119 iops : min= 480, max= 544, avg=535.58, stdev=17.98, samples=19 00:34:24.119 lat (msec) : 10=0.30%, 20=0.30%, 50=99.11%, 100=0.30% 00:34:24.119 cpu : usr=98.54%, sys=0.92%, ctx=79, majf=0, minf=28 00:34:24.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:24.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.119 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:24.119 00:34:24.119 Run status group 0 (all jobs): 00:34:24.119 READ: bw=50.5MiB/s (52.9MB/s), 2149KiB/s-2178KiB/s (2201kB/s-2230kB/s), io=506MiB (531MB), run=10001-10028msec 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:24.119 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.120 bdev_null0 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
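
The config that the fio bdev plugin reads from /dev/fd/62 is assembled the same way as in the first run traced above: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per target and jq joins them into a single JSON document. A sketch of an equivalent on-disk config for the cnode0 target being assembled here; the outer subsystems/bdev wrapper is an assumption based on SPDK's generic JSON config layout, the inner entry mirrors the parameters printed in the trace, and the actual run appends a matching Nvme1 entry once cnode1 exists:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
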
00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.120 [2024-07-25 12:46:56.199357] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.120 bdev_null1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:24.120 { 00:34:24.120 "params": { 00:34:24.120 "name": "Nvme$subsystem", 00:34:24.120 "trtype": "$TEST_TRANSPORT", 00:34:24.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.120 "adrfam": "ipv4", 00:34:24.120 "trsvcid": "$NVMF_PORT", 00:34:24.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.120 "hdgst": ${hdgst:-false}, 00:34:24.120 "ddgst": ${ddgst:-false} 00:34:24.120 }, 00:34:24.120 "method": "bdev_nvme_attach_controller" 00:34:24.120 } 00:34:24.120 EOF 00:34:24.120 )") 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:24.120 { 00:34:24.120 "params": { 00:34:24.120 "name": "Nvme$subsystem", 00:34:24.120 "trtype": "$TEST_TRANSPORT", 00:34:24.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.120 "adrfam": "ipv4", 00:34:24.120 "trsvcid": "$NVMF_PORT", 00:34:24.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.120 "hdgst": ${hdgst:-false}, 00:34:24.120 "ddgst": ${ddgst:-false} 00:34:24.120 }, 00:34:24.120 "method": "bdev_nvme_attach_controller" 00:34:24.120 } 00:34:24.120 EOF 00:34:24.120 )") 00:34:24.120 
12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:24.120 "params": { 00:34:24.120 "name": "Nvme0", 00:34:24.120 "trtype": "tcp", 00:34:24.120 "traddr": "10.0.0.2", 00:34:24.120 "adrfam": "ipv4", 00:34:24.120 "trsvcid": "4420", 00:34:24.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.120 "hdgst": false, 00:34:24.120 "ddgst": false 00:34:24.120 }, 00:34:24.120 "method": "bdev_nvme_attach_controller" 00:34:24.120 },{ 00:34:24.120 "params": { 00:34:24.120 "name": "Nvme1", 00:34:24.120 "trtype": "tcp", 00:34:24.120 "traddr": "10.0.0.2", 00:34:24.120 "adrfam": "ipv4", 00:34:24.120 "trsvcid": "4420", 00:34:24.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:24.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:24.120 "hdgst": false, 00:34:24.120 "ddgst": false 00:34:24.120 }, 00:34:24.120 "method": "bdev_nvme_attach_controller" 00:34:24.120 }' 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:24.120 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:24.121 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:24.121 12:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.121 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:24.121 ... 00:34:24.121 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:24.121 ... 
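
The filename0/filename1 job sections fio echoes below come from gen_fio_conf on /dev/fd/61. A rough hand-written equivalent for this round's parameters (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, 5-second time-based run); the exact option list and the Nvme0n1/Nvme1n1 bdev names are assumptions rather than the literal generator output:

  # dif.fio (hypothetical file name)
  [global]
  ioengine=spdk_bdev
  thread=1
  direct=1
  time_based=1
  runtime=5
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

Launched the same way as in the trace, with real files standing in for the /dev/fd descriptors and a JSON config like the sketch further up:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
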
00:34:24.121 fio-3.35 00:34:24.121 Starting 4 threads 00:34:24.121 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.405 00:34:29.405 filename0: (groupid=0, jobs=1): err= 0: pid=645610: Thu Jul 25 12:47:02 2024 00:34:29.405 read: IOPS=2313, BW=18.1MiB/s (18.9MB/s)(90.4MiB/5003msec) 00:34:29.405 slat (nsec): min=7211, max=76900, avg=8278.54, stdev=3315.89 00:34:29.405 clat (usec): min=1681, max=6399, avg=3438.43, stdev=503.94 00:34:29.405 lat (usec): min=1693, max=6409, avg=3446.71, stdev=504.06 00:34:29.405 clat percentiles (usec): 00:34:29.405 | 1.00th=[ 2442], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3195], 00:34:29.405 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3359], 60.00th=[ 3425], 00:34:29.405 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3818], 95.00th=[ 4817], 00:34:29.405 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5538], 99.95th=[ 5604], 00:34:29.405 | 99.99th=[ 6259] 00:34:29.405 bw ( KiB/s): min=18032, max=18832, per=25.70%, avg=18510.40, stdev=283.97, samples=10 00:34:29.405 iops : min= 2254, max= 2354, avg=2313.80, stdev=35.50, samples=10 00:34:29.405 lat (msec) : 2=0.04%, 4=90.91%, 10=9.05% 00:34:29.405 cpu : usr=96.76%, sys=2.96%, ctx=7, majf=0, minf=0 00:34:29.405 IO depths : 1=0.1%, 2=0.3%, 4=68.1%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.405 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.405 issued rwts: total=11572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.405 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.405 filename0: (groupid=0, jobs=1): err= 0: pid=645611: Thu Jul 25 12:47:02 2024 00:34:29.405 read: IOPS=2254, BW=17.6MiB/s (18.5MB/s)(88.8MiB/5042msec) 00:34:29.405 slat (nsec): min=7208, max=44300, avg=8237.04, stdev=3022.05 00:34:29.405 clat (usec): min=1502, max=42015, avg=3509.99, stdev=801.83 00:34:29.406 lat (usec): min=1509, max=42023, avg=3518.23, stdev=801.76 00:34:29.406 clat percentiles (usec): 00:34:29.406 | 1.00th=[ 2704], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3228], 00:34:29.406 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3392], 60.00th=[ 3458], 00:34:29.406 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3916], 95.00th=[ 4883], 00:34:29.406 | 99.00th=[ 5211], 99.50th=[ 5407], 99.90th=[ 5866], 99.95th=[ 5997], 00:34:29.406 | 99.99th=[42206] 00:34:29.406 bw ( KiB/s): min=17616, max=18864, per=25.25%, avg=18187.70, stdev=451.45, samples=10 00:34:29.406 iops : min= 2202, max= 2358, avg=2273.40, stdev=56.36, samples=10 00:34:29.406 lat (msec) : 2=0.08%, 4=90.37%, 10=9.53%, 50=0.03% 00:34:29.406 cpu : usr=96.49%, sys=3.23%, ctx=7, majf=0, minf=9 00:34:29.406 IO depths : 1=0.1%, 2=0.2%, 4=68.3%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.406 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.406 issued rwts: total=11368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.406 filename1: (groupid=0, jobs=1): err= 0: pid=645612: Thu Jul 25 12:47:02 2024 00:34:29.406 read: IOPS=2232, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5002msec) 00:34:29.406 slat (nsec): min=7207, max=54807, avg=8196.46, stdev=3003.53 00:34:29.406 clat (usec): min=1119, max=6647, avg=3562.28, stdev=599.71 00:34:29.406 lat (usec): min=1126, max=6681, avg=3570.48, stdev=599.89 00:34:29.406 clat percentiles (usec): 00:34:29.406 | 1.00th=[ 2638], 5.00th=[ 
3064], 10.00th=[ 3163], 20.00th=[ 3228], 00:34:29.406 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3425], 60.00th=[ 3458], 00:34:29.406 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 4817], 95.00th=[ 5014], 00:34:29.406 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 5735], 99.95th=[ 6390], 00:34:29.406 | 99.99th=[ 6587] 00:34:29.406 bw ( KiB/s): min=17136, max=18592, per=24.76%, avg=17831.11, stdev=448.01, samples=9 00:34:29.406 iops : min= 2142, max= 2324, avg=2228.89, stdev=56.00, samples=9 00:34:29.406 lat (msec) : 2=0.17%, 4=86.57%, 10=13.26% 00:34:29.406 cpu : usr=96.72%, sys=2.98%, ctx=8, majf=0, minf=11 00:34:29.406 IO depths : 1=0.1%, 2=0.1%, 4=71.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.406 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.406 issued rwts: total=11165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.406 filename1: (groupid=0, jobs=1): err= 0: pid=645613: Thu Jul 25 12:47:02 2024 00:34:29.406 read: IOPS=2255, BW=17.6MiB/s (18.5MB/s)(88.2MiB/5003msec) 00:34:29.406 slat (nsec): min=7208, max=64402, avg=8197.15, stdev=3101.12 00:34:29.406 clat (usec): min=641, max=5783, avg=3525.77, stdev=559.60 00:34:29.406 lat (usec): min=663, max=5790, avg=3533.97, stdev=559.45 00:34:29.406 clat percentiles (usec): 00:34:29.406 | 1.00th=[ 2638], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3228], 00:34:29.406 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3392], 60.00th=[ 3458], 00:34:29.406 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 4686], 95.00th=[ 5014], 00:34:29.406 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 5604], 99.95th=[ 5669], 00:34:29.406 | 99.99th=[ 5800] 00:34:29.406 bw ( KiB/s): min=17440, max=19136, per=25.06%, avg=18051.50, stdev=616.77, samples=10 00:34:29.406 iops : min= 2180, max= 2392, avg=2256.40, stdev=77.13, samples=10 00:34:29.406 lat (usec) : 750=0.01% 00:34:29.406 lat (msec) : 2=0.07%, 4=88.64%, 10=11.28% 00:34:29.406 cpu : usr=97.16%, sys=2.54%, ctx=13, majf=0, minf=0 00:34:29.406 IO depths : 1=0.1%, 2=0.1%, 4=69.9%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.406 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.406 issued rwts: total=11285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.406 00:34:29.406 Run status group 0 (all jobs): 00:34:29.406 READ: bw=70.3MiB/s (73.7MB/s), 17.4MiB/s-18.1MiB/s (18.3MB/s-18.9MB/s), io=355MiB (372MB), run=5002-5042msec 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.406 12:47:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.406 00:34:29.406 real 0m24.446s 00:34:29.406 user 5m2.371s 00:34:29.406 sys 0m4.274s 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:29.406 12:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.406 ************************************ 00:34:29.406 END TEST fio_dif_rand_params 00:34:29.406 ************************************ 00:34:29.406 12:47:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:29.406 12:47:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:29.406 12:47:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:29.406 12:47:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.406 12:47:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.406 ************************************ 00:34:29.406 START TEST fio_dif_digest 00:34:29.406 ************************************ 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:29.406 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.407 bdev_null0 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.407 [2024-07-25 12:47:02.719449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.407 { 00:34:29.407 "params": { 00:34:29.407 "name": "Nvme$subsystem", 00:34:29.407 "trtype": "$TEST_TRANSPORT", 00:34:29.407 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:29.407 "adrfam": "ipv4", 00:34:29.407 "trsvcid": "$NVMF_PORT", 00:34:29.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.407 "hdgst": ${hdgst:-false}, 00:34:29.407 "ddgst": ${ddgst:-false} 00:34:29.407 }, 00:34:29.407 "method": "bdev_nvme_attach_controller" 00:34:29.407 } 00:34:29.407 EOF 00:34:29.407 )") 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:29.407 "params": { 00:34:29.407 "name": "Nvme0", 00:34:29.407 "trtype": "tcp", 00:34:29.407 "traddr": "10.0.0.2", 00:34:29.407 "adrfam": "ipv4", 00:34:29.407 "trsvcid": "4420", 00:34:29.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.407 "hdgst": true, 00:34:29.407 "ddgst": true 00:34:29.407 }, 00:34:29.407 "method": "bdev_nvme_attach_controller" 00:34:29.407 }' 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.407 12:47:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.975 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:29.975 ... 
00:34:29.975 fio-3.35 00:34:29.975 Starting 3 threads 00:34:29.975 EAL: No free 2048 kB hugepages reported on node 1 00:34:42.278 00:34:42.278 filename0: (groupid=0, jobs=1): err= 0: pid=646714: Thu Jul 25 12:47:13 2024 00:34:42.278 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(300MiB/10046msec) 00:34:42.278 slat (nsec): min=7537, max=50720, avg=8488.68, stdev=1404.08 00:34:42.278 clat (usec): min=9804, max=56064, avg=12539.80, stdev=1486.07 00:34:42.278 lat (usec): min=9812, max=56072, avg=12548.29, stdev=1486.10 00:34:42.278 clat percentiles (usec): 00:34:42.278 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:34:42.278 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:34:42.278 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:34:42.278 | 99.00th=[14877], 99.50th=[15139], 99.90th=[17171], 99.95th=[49546], 00:34:42.278 | 99.99th=[55837] 00:34:42.278 bw ( KiB/s): min=29952, max=31488, per=33.92%, avg=30668.80, stdev=420.24, samples=20 00:34:42.278 iops : min= 234, max= 246, avg=239.60, stdev= 3.28, samples=20 00:34:42.278 lat (msec) : 10=0.13%, 20=99.79%, 50=0.04%, 100=0.04% 00:34:42.278 cpu : usr=94.59%, sys=5.13%, ctx=24, majf=0, minf=150 00:34:42.278 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.278 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.278 filename0: (groupid=0, jobs=1): err= 0: pid=646715: Thu Jul 25 12:47:13 2024 00:34:42.278 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(295MiB/10046msec) 00:34:42.278 slat (nsec): min=7516, max=36276, avg=8449.61, stdev=1317.44 00:34:42.278 clat (usec): min=9486, max=52460, avg=12763.72, stdev=1457.72 00:34:42.278 lat (usec): min=9495, max=52468, avg=12772.17, stdev=1457.76 00:34:42.278 clat percentiles (usec): 00:34:42.278 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11469], 20.00th=[11994], 00:34:42.278 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:34:42.278 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:34:42.278 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16319], 99.95th=[46924], 00:34:42.278 | 99.99th=[52691] 00:34:42.278 bw ( KiB/s): min=29184, max=31488, per=33.33%, avg=30134.15, stdev=510.20, samples=20 00:34:42.278 iops : min= 228, max= 246, avg=235.40, stdev= 4.01, samples=20 00:34:42.278 lat (msec) : 10=0.25%, 20=99.66%, 50=0.04%, 100=0.04% 00:34:42.278 cpu : usr=95.24%, sys=4.49%, ctx=25, majf=0, minf=124 00:34:42.278 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.278 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.278 filename0: (groupid=0, jobs=1): err= 0: pid=646716: Thu Jul 25 12:47:13 2024 00:34:42.278 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10006msec) 00:34:42.278 slat (nsec): min=7504, max=31751, avg=8472.43, stdev=1220.79 00:34:42.278 clat (usec): min=7006, max=17200, avg=12807.45, stdev=1102.52 00:34:42.278 lat (usec): min=7013, max=17231, avg=12815.93, stdev=1102.59 00:34:42.278 clat percentiles (usec): 00:34:42.278 
| 1.00th=[10290], 5.00th=[11076], 10.00th=[11469], 20.00th=[11863], 00:34:42.278 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:34:42.278 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:34:42.278 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16319], 99.95th=[16909], 00:34:42.278 | 99.99th=[17171] 00:34:42.278 bw ( KiB/s): min=28928, max=32512, per=33.11%, avg=29939.20, stdev=861.06, samples=20 00:34:42.278 iops : min= 226, max= 254, avg=233.90, stdev= 6.73, samples=20 00:34:42.278 lat (msec) : 10=0.47%, 20=99.53% 00:34:42.278 cpu : usr=94.76%, sys=4.96%, ctx=27, majf=0, minf=145 00:34:42.278 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.278 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.278 00:34:42.278 Run status group 0 (all jobs): 00:34:42.278 READ: bw=88.3MiB/s (92.6MB/s), 29.3MiB/s-29.8MiB/s (30.7MB/s-31.3MB/s), io=887MiB (930MB), run=10006-10046msec 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.278 00:34:42.278 real 0m11.192s 00:34:42.278 user 0m39.664s 00:34:42.278 sys 0m1.740s 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:42.278 12:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:42.278 ************************************ 00:34:42.278 END TEST fio_dif_digest 00:34:42.278 ************************************ 00:34:42.278 12:47:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:42.278 12:47:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:42.278 12:47:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:34:42.278 rmmod nvme_tcp 00:34:42.278 rmmod nvme_fabrics 00:34:42.278 rmmod nvme_keyring 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 637783 ']' 00:34:42.278 12:47:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 637783 00:34:42.278 12:47:13 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 637783 ']' 00:34:42.278 12:47:13 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 637783 00:34:42.278 12:47:13 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:34:42.278 12:47:13 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:42.278 12:47:13 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 637783 00:34:42.278 12:47:14 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:42.278 12:47:14 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:42.278 12:47:14 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 637783' 00:34:42.278 killing process with pid 637783 00:34:42.278 12:47:14 nvmf_dif -- common/autotest_common.sh@967 -- # kill 637783 00:34:42.278 12:47:14 nvmf_dif -- common/autotest_common.sh@972 -- # wait 637783 00:34:42.278 12:47:14 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:42.278 12:47:14 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:44.840 Waiting for block devices as requested 00:34:44.840 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:44.840 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:44.840 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:44.840 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:44.840 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:45.101 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:45.101 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:45.101 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:45.361 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:34:45.361 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:45.361 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:45.622 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:45.622 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:45.622 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:45.882 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:45.882 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:45.882 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:45.882 12:47:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:45.882 12:47:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:45.882 12:47:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:45.882 12:47:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:45.882 12:47:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.882 12:47:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:45.882 12:47:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.428 12:47:21 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:48.428 00:34:48.428 real 1m19.485s 00:34:48.428 user 7m32.489s 00:34:48.428 sys 0m21.727s 00:34:48.428 12:47:21 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:48.428 
12:47:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.428 ************************************ 00:34:48.428 END TEST nvmf_dif 00:34:48.428 ************************************ 00:34:48.428 12:47:21 -- common/autotest_common.sh@1142 -- # return 0 00:34:48.428 12:47:21 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:48.428 12:47:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:48.428 12:47:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:48.428 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:34:48.428 ************************************ 00:34:48.428 START TEST nvmf_abort_qd_sizes 00:34:48.428 ************************************ 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:48.428 * Looking for test storage... 00:34:48.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.428 12:47:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:34:48.428 12:47:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:34:56.566 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:56.567 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:56.567 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:56.567 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:56.567 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
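The xtrace that follows turns the two discovered E810 ports into a loopback NVMe/TCP test topology: the first port is moved into a network namespace and carries the target address 10.0.0.2, while the second stays in the root namespace as the initiator at 10.0.0.1. A condensed summary of those steps, with the interface and namespace names taken from the trace below:

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                    # initiator -> target check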
00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.567 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.828 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:56.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:34:56.828 00:34:56.828 --- 10.0.0.2 ping statistics --- 00:34:56.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.828 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:34:56.828 12:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:56.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:34:56.828 00:34:56.828 --- 10.0.0.1 ping statistics --- 00:34:56.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.828 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:34:56.828 12:47:30 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.828 12:47:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:34:56.828 12:47:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:56.828 12:47:30 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:01.031 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:01.031 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:02.415 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=656260 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 656260 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 656260 ']' 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:02.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:02.676 12:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.676 [2024-07-25 12:47:35.944961] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:35:02.677 [2024-07-25 12:47:35.945049] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.677 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.677 [2024-07-25 12:47:36.052826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:02.938 [2024-07-25 12:47:36.147998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.938 [2024-07-25 12:47:36.148057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.938 [2024-07-25 12:47:36.148066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.938 [2024-07-25 12:47:36.148072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.938 [2024-07-25 12:47:36.148078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.938 [2024-07-25 12:47:36.148211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.938 [2024-07-25 12:47:36.148365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.938 [2024-07-25 12:47:36.148517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.938 [2024-07-25 12:47:36.148519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:35:03.509 12:47:36 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:03.509 12:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.509 ************************************ 00:35:03.509 START TEST spdk_target_abort 00:35:03.509 ************************************ 00:35:03.509 12:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:35:03.509 12:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:03.509 12:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:35:03.509 12:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.509 12:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:06.810 spdk_targetn1 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:06.810 [2024-07-25 12:47:39.772635] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.810 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:06.811 [2024-07-25 12:47:39.825079] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:06.811 12:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:06.811 EAL: No free 2048 kB hugepages 
reported on node 1 00:35:10.110 Initializing NVMe Controllers 00:35:10.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:10.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:10.110 Initialization complete. Launching workers. 00:35:10.110 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6003, failed: 0 00:35:10.110 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1706, failed to submit 4297 00:35:10.110 success 760, unsuccess 946, failed 0 00:35:10.110 12:47:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:10.110 12:47:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:10.110 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.410 Initializing NVMe Controllers 00:35:13.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:13.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:13.410 Initialization complete. Launching workers. 00:35:13.410 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8598, failed: 0 00:35:13.410 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1231, failed to submit 7367 00:35:13.410 success 362, unsuccess 869, failed 0 00:35:13.410 12:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:13.410 12:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:13.410 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.709 Initializing NVMe Controllers 00:35:16.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:16.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:16.709 Initialization complete. Launching workers. 
00:35:16.709 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16153, failed: 0 00:35:16.709 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1914, failed to submit 14239 00:35:16.709 success 147, unsuccess 1767, failed 0 00:35:16.709 12:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:16.709 12:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.709 12:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:16.709 12:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.709 12:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:16.709 12:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.709 12:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 656260 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 656260 ']' 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 656260 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 656260 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 656260' 00:35:18.620 killing process with pid 656260 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 656260 00:35:18.620 12:47:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 656260 00:35:18.880 00:35:18.880 real 0m15.123s 00:35:18.881 user 1m0.850s 00:35:18.881 sys 0m1.933s 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:18.881 ************************************ 00:35:18.881 END TEST spdk_target_abort 00:35:18.881 ************************************ 00:35:18.881 12:47:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:18.881 12:47:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:18.881 12:47:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:18.881 12:47:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:18.881 12:47:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:18.881 
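For orientation before the kernel-target variant below: the spdk_target_abort run above drives everything through a single rabort helper. A minimal bash sketch of that helper, reconstructed from the xtrace (queue depths, workload flags and the transport-ID string are taken from the trace; the real target/abort_qd_sizes.sh may differ in detail, and $rootdir stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout seen in the log):

rabort() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local qds qd
    local target r

    qds=(4 24 64)

    # Build the -r transport ID, e.g.
    # 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"
    done

    # Run the abort example at each queue depth with the options seen above:
    # 50/50 read-write mix, 4 KiB I/O size.
    for qd in "${qds[@]}"; do
        "$rootdir/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
}

# spdk_target_abort (above) points it at the SPDK target on 10.0.0.2,
# kernel_target_abort (below) at the kernel nvmet target on 10.0.0.1:
rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn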
************************************ 00:35:18.881 START TEST kernel_target_abort 00:35:18.881 ************************************ 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:18.881 12:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:22.232 Waiting for block devices as requested 00:35:22.493 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:22.493 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:22.493 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:22.753 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:22.753 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:22.753 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:23.015 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:23.015 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:23.015 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:35:23.275 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:23.275 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:23.275 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:23.535 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:23.535 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:23.535 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:23.796 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:23.796 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:23.796 No valid GPT data, bailing 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:23.796 12:47:57 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:35:23.796 00:35:23.796 Discovery Log Number of Records 2, Generation counter 2 00:35:23.796 =====Discovery Log Entry 0====== 00:35:23.796 trtype: tcp 00:35:23.796 adrfam: ipv4 00:35:23.796 subtype: current discovery subsystem 00:35:23.796 treq: not specified, sq flow control disable supported 00:35:23.796 portid: 1 00:35:23.796 trsvcid: 4420 00:35:23.796 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:23.796 traddr: 10.0.0.1 00:35:23.796 eflags: none 00:35:23.796 sectype: none 00:35:23.796 =====Discovery Log Entry 1====== 00:35:23.796 trtype: tcp 00:35:23.796 adrfam: ipv4 00:35:23.796 subtype: nvme subsystem 00:35:23.796 treq: not specified, sq flow control disable supported 00:35:23.796 portid: 1 00:35:23.796 trsvcid: 4420 00:35:23.796 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:23.796 traddr: 10.0.0.1 00:35:23.796 eflags: none 00:35:23.796 sectype: none 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.796 12:47:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:23.796 12:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.058 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.358 Initializing NVMe Controllers 00:35:27.358 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:27.358 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:27.358 Initialization complete. Launching workers. 00:35:27.358 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73115, failed: 0 00:35:27.358 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 73115, failed to submit 0 00:35:27.358 success 0, unsuccess 73115, failed 0 00:35:27.358 12:48:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.358 12:48:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.358 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.656 Initializing NVMe Controllers 00:35:30.656 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:30.656 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:30.656 Initialization complete. Launching workers. 
00:35:30.656 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117258, failed: 0 00:35:30.656 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29286, failed to submit 87972 00:35:30.656 success 0, unsuccess 29286, failed 0 00:35:30.656 12:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:30.656 12:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:30.656 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.200 Initializing NVMe Controllers 00:35:33.200 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:33.200 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:33.200 Initialization complete. Launching workers. 00:35:33.200 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 112043, failed: 0 00:35:33.201 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28026, failed to submit 84017 00:35:33.201 success 0, unsuccess 28026, failed 0 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:33.201 12:48:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:37.409 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:35:37.409 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:37.409 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:39.323 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:35:39.323 00:35:39.323 real 0m20.248s 00:35:39.323 user 0m9.819s 00:35:39.323 sys 0m6.127s 00:35:39.323 12:48:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:39.323 12:48:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:39.323 ************************************ 00:35:39.323 END TEST kernel_target_abort 00:35:39.323 ************************************ 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:39.323 rmmod nvme_tcp 00:35:39.323 rmmod nvme_fabrics 00:35:39.323 rmmod nvme_keyring 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 656260 ']' 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 656260 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 656260 ']' 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 656260 00:35:39.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (656260) - No such process 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 656260 is not found' 00:35:39.323 Process with pid 656260 is not found 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:39.323 12:48:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:43.542 Waiting for block devices as requested 00:35:43.542 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:43.542 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:43.542 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:43.542 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:43.542 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:43.542 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:43.542 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:43.542 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:43.803 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:35:43.803 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:44.064 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:44.064 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:44.064 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:44.324 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
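The kernel_target_abort run traced above configures the in-kernel nvmet target purely through configfs. A sketch of the configure_kernel_target / clean_kernel_target sequence implied by the xtrace; the echo redirect targets are not visible in the trace, so the standard nvmet configfs attribute names (attr_model, attr_allow_any_host, device_path, addr_traddr, ...) are assumed here:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$ns" "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string
echo 1 > "$subsys/attr_allow_any_host"                         # no host allow-list
echo /dev/nvme0n1 > "$ns/device_path"                          # back the namespace with the local NVMe disk found above
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"                            # address/port/transport as in the discovery log above
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                            # expose the subsystem on the port

# clean_kernel_target then tears it down in reverse:
echo 0 > "$ns/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet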
00:35:44.324 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:44.324 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:44.585 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:44.585 12:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:44.585 12:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:44.585 12:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:44.585 12:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:44.585 12:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.585 12:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:44.585 12:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.497 12:48:19 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:46.497 00:35:46.497 real 0m58.422s 00:35:46.497 user 1m16.595s 00:35:46.497 sys 0m20.087s 00:35:46.497 12:48:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:46.497 12:48:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:46.497 ************************************ 00:35:46.497 END TEST nvmf_abort_qd_sizes 00:35:46.497 ************************************ 00:35:46.497 12:48:19 -- common/autotest_common.sh@1142 -- # return 0 00:35:46.497 12:48:19 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:46.497 12:48:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:46.497 12:48:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:46.497 12:48:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.758 ************************************ 00:35:46.758 START TEST keyring_file 00:35:46.758 ************************************ 00:35:46.758 12:48:19 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:46.758 * Looking for test storage... 
00:35:46.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.758 12:48:20 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.758 12:48:20 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.758 12:48:20 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.758 12:48:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.758 12:48:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.758 12:48:20 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.758 12:48:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:46.758 12:48:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sThLyXtvFx 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:46.758 12:48:20 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sThLyXtvFx 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sThLyXtvFx 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.sThLyXtvFx 00:35:46.758 12:48:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8WMP3dJ12k 00:35:46.758 12:48:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:46.758 12:48:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:47.019 12:48:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8WMP3dJ12k 00:35:47.019 12:48:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8WMP3dJ12k 00:35:47.020 12:48:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8WMP3dJ12k 00:35:47.020 12:48:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=667006 00:35:47.020 12:48:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 667006 00:35:47.020 12:48:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:47.020 12:48:20 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 667006 ']' 00:35:47.020 12:48:20 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.020 12:48:20 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:47.020 12:48:20 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.020 12:48:20 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:47.020 12:48:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:47.020 [2024-07-25 12:48:20.266152] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
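The two temporary key files created above come from the prep_key helper in test/keyring/common.sh; a rough sketch consistent with the xtrace. The format_interchange_psk / python step that wraps the raw hex key into the NVMe TLS PSK interchange format is not shown in full in the log, so it is kept opaque here:

prep_key() {
    local name=$1 key=$2 digest=$3 path

    path=$(mktemp)                        # e.g. /tmp/tmp.sThLyXtvFx
    # Convert the hex key to the TLS PSK interchange format; the helper's
    # python body is not reproduced from the log.
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"                    # keyring_file_add_key requires 0600 (see the 0660 failure later)
    echo "$path"
}

key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)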
00:35:47.020 [2024-07-25 12:48:20.266224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667006 ] 00:35:47.020 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.020 [2024-07-25 12:48:20.352609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.280 [2024-07-25 12:48:20.446398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:47.853 12:48:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:47.853 [2024-07-25 12:48:21.130892] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.853 null0 00:35:47.853 [2024-07-25 12:48:21.162941] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:47.853 [2024-07-25 12:48:21.163501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:47.853 [2024-07-25 12:48:21.170947] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.853 12:48:21 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:47.853 [2024-07-25 12:48:21.186984] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:47.853 request: 00:35:47.853 { 00:35:47.853 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.853 "secure_channel": false, 00:35:47.853 "listen_address": { 00:35:47.853 "trtype": "tcp", 00:35:47.853 "traddr": "127.0.0.1", 00:35:47.853 "trsvcid": "4420" 00:35:47.853 }, 00:35:47.853 "method": "nvmf_subsystem_add_listener", 00:35:47.853 "req_id": 1 00:35:47.853 } 00:35:47.853 Got JSON-RPC error response 00:35:47.853 response: 00:35:47.853 { 00:35:47.853 "code": -32602, 00:35:47.853 "message": "Invalid parameters" 00:35:47.853 } 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:47.853 12:48:21 keyring_file -- keyring/file.sh@46 -- # bperfpid=667293 00:35:47.853 12:48:21 keyring_file -- keyring/file.sh@48 -- # waitforlisten 667293 /var/tmp/bperf.sock 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 667293 ']' 00:35:47.853 12:48:21 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:47.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:47.853 12:48:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:47.853 [2024-07-25 12:48:21.244602] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 00:35:47.853 [2024-07-25 12:48:21.244664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667293 ] 00:35:48.114 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.114 [2024-07-25 12:48:21.326732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.114 [2024-07-25 12:48:21.436363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.055 12:48:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:49.055 12:48:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:49.055 12:48:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:49.055 12:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:49.055 12:48:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8WMP3dJ12k 00:35:49.055 12:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8WMP3dJ12k 00:35:49.315 12:48:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:49.315 12:48:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:49.315 12:48:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.315 12:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.315 12:48:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.576 12:48:22 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.sThLyXtvFx == \/\t\m\p\/\t\m\p\.\s\T\h\L\y\X\t\v\F\x ]] 00:35:49.576 12:48:22 keyring_file -- keyring/file.sh@52 
-- # jq -r .path 00:35:49.576 12:48:22 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:49.576 12:48:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.8WMP3dJ12k == \/\t\m\p\/\t\m\p\.\8\W\M\P\3\d\J\1\2\k ]] 00:35:49.576 12:48:22 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.576 12:48:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.836 12:48:23 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:49.836 12:48:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:49.836 12:48:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:49.836 12:48:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.836 12:48:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.836 12:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.836 12:48:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:50.096 12:48:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:50.096 12:48:23 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.096 12:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.355 [2024-07-25 12:48:23.585654] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:50.355 nvme0n1 00:35:50.355 12:48:23 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:50.355 12:48:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:50.355 12:48:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.355 12:48:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.355 12:48:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.355 12:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.615 12:48:23 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:50.615 12:48:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:35:50.615 12:48:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:50.615 12:48:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.615 12:48:23 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:50.615 12:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.615 12:48:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:50.875 12:48:24 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:50.875 12:48:24 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:50.875 Running I/O for 1 seconds... 00:35:51.816 00:35:51.816 Latency(us) 00:35:51.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.816 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:51.816 nvme0n1 : 1.00 12773.83 49.90 0.00 0.00 9989.79 4713.55 19963.27 00:35:51.816 =================================================================================================================== 00:35:51.816 Total : 12773.83 49.90 0.00 0.00 9989.79 4713.55 19963.27 00:35:51.816 0 00:35:52.076 12:48:25 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:52.076 12:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:52.076 12:48:25 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:35:52.076 12:48:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.076 12:48:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:52.076 12:48:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.076 12:48:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.076 12:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.337 12:48:25 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:52.337 12:48:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:35:52.337 12:48:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:52.337 12:48:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.337 12:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.337 12:48:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.337 12:48:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:52.598 12:48:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:52.598 12:48:25 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:52.598 12:48:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:52.598 12:48:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:52.598 12:48:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:52.598 12:48:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:52.598 12:48:25 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:35:52.598 12:48:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:52.598 12:48:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:52.598 12:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:52.598 [2024-07-25 12:48:25.998610] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:52.598 [2024-07-25 12:48:25.998654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdacc70 (107): Transport endpoint is not connected 00:35:52.598 [2024-07-25 12:48:25.999647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdacc70 (9): Bad file descriptor 00:35:52.598 [2024-07-25 12:48:26.000648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:52.598 [2024-07-25 12:48:26.000663] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:52.598 [2024-07-25 12:48:26.000674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:52.598 request: 00:35:52.598 { 00:35:52.598 "name": "nvme0", 00:35:52.598 "trtype": "tcp", 00:35:52.598 "traddr": "127.0.0.1", 00:35:52.598 "adrfam": "ipv4", 00:35:52.598 "trsvcid": "4420", 00:35:52.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.598 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.598 "prchk_reftag": false, 00:35:52.598 "prchk_guard": false, 00:35:52.598 "hdgst": false, 00:35:52.598 "ddgst": false, 00:35:52.598 "psk": "key1", 00:35:52.598 "method": "bdev_nvme_attach_controller", 00:35:52.598 "req_id": 1 00:35:52.598 } 00:35:52.598 Got JSON-RPC error response 00:35:52.598 response: 00:35:52.598 { 00:35:52.598 "code": -5, 00:35:52.598 "message": "Input/output error" 00:35:52.598 } 00:35:52.859 12:48:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:52.859 12:48:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:52.859 12:48:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:52.859 12:48:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:52.859 12:48:26 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.859 12:48:26 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:52.859 12:48:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.859 12:48:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:53.119 12:48:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:53.119 12:48:26 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:53.119 12:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:53.379 12:48:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:53.379 12:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:53.638 12:48:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:53.638 12:48:26 keyring_file -- keyring/file.sh@77 -- # jq length 00:35:53.638 12:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.638 12:48:27 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:53.638 12:48:27 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.sThLyXtvFx 00:35:53.638 12:48:27 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:53.638 12:48:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:53.638 12:48:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:53.638 12:48:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:53.638 12:48:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:53.638 12:48:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:53.638 12:48:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:53.638 12:48:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:53.638 12:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:53.897 [2024-07-25 12:48:27.215899] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sThLyXtvFx': 0100660 00:35:53.897 [2024-07-25 12:48:27.215927] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:53.897 request: 00:35:53.897 { 00:35:53.897 "name": "key0", 00:35:53.897 "path": "/tmp/tmp.sThLyXtvFx", 00:35:53.897 "method": "keyring_file_add_key", 00:35:53.897 "req_id": 1 00:35:53.897 } 00:35:53.897 Got JSON-RPC error response 00:35:53.897 response: 00:35:53.897 { 00:35:53.897 "code": -1, 00:35:53.897 "message": "Operation not permitted" 00:35:53.897 } 00:35:53.897 12:48:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:53.897 12:48:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:53.897 12:48:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:53.897 12:48:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
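The repeated refcount assertions in the trace above all go through a couple of small helpers in test/keyring/common.sh; a sketch matching the jq filters visible in the xtrace ($rootdir again stands in for the jenkins spdk checkout):

bperf_cmd() {
    # every bperf-side RPC goes to the bdevperf instance over /var/tmp/bperf.sock
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
}

get_key() {
    bperf_cmd keyring_get_keys | jq '.[] | select(.name == "'$1'")'
}

get_refcnt() {
    get_key "$1" | jq -r .refcnt
}

# e.g. after attaching nvme0 with --psk key0, the test asserts:
(( $(get_refcnt key0) == 2 ))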
00:35:53.897 12:48:27 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.sThLyXtvFx 00:35:53.897 12:48:27 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:53.897 12:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sThLyXtvFx 00:35:54.156 12:48:27 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.sThLyXtvFx 00:35:54.156 12:48:27 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:35:54.156 12:48:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:54.157 12:48:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:54.157 12:48:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.157 12:48:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:54.157 12:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.416 12:48:27 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:54.416 12:48:27 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.416 12:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.416 [2024-07-25 12:48:27.809492] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.sThLyXtvFx': No such file or directory 00:35:54.416 [2024-07-25 12:48:27.809521] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:54.416 [2024-07-25 12:48:27.809558] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:54.416 [2024-07-25 12:48:27.809568] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:54.416 [2024-07-25 12:48:27.809577] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:54.416 request: 00:35:54.416 { 00:35:54.416 "name": "nvme0", 00:35:54.416 "trtype": "tcp", 00:35:54.416 "traddr": "127.0.0.1", 00:35:54.416 "adrfam": "ipv4", 00:35:54.416 "trsvcid": "4420", 00:35:54.416 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:54.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.416 "prchk_reftag": false, 00:35:54.416 "prchk_guard": false, 00:35:54.416 "hdgst": false, 00:35:54.416 "ddgst": false, 00:35:54.416 "psk": "key0", 00:35:54.416 "method": "bdev_nvme_attach_controller", 00:35:54.416 "req_id": 1 00:35:54.416 } 00:35:54.416 Got JSON-RPC error response 00:35:54.416 response: 00:35:54.416 { 00:35:54.416 "code": -19, 00:35:54.416 "message": "No such device" 00:35:54.416 } 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:54.416 12:48:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:54.417 12:48:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:54.417 12:48:27 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:54.417 12:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:54.713 12:48:28 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:54.713 12:48:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:54.713 12:48:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:54.713 12:48:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:54.713 12:48:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:54.714 12:48:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:54.714 12:48:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9bWiZXLmdw 00:35:54.714 12:48:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:54.714 12:48:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:54.714 12:48:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:54.714 12:48:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:54.714 12:48:28 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:54.714 12:48:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:54.714 12:48:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:54.714 12:48:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9bWiZXLmdw 00:35:54.714 12:48:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9bWiZXLmdw 00:35:54.714 12:48:28 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.9bWiZXLmdw 00:35:54.714 12:48:28 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9bWiZXLmdw 00:35:54.714 12:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9bWiZXLmdw 00:35:55.020 12:48:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:55.020 12:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:55.281 nvme0n1 00:35:55.281 12:48:28 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:35:55.281 12:48:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.281 12:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.281 12:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.281 12:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.281 12:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.542 12:48:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:55.542 12:48:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:55.542 12:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:55.802 12:48:28 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:35:55.802 12:48:28 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:35:55.802 12:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.802 12:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.802 12:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.802 12:48:29 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:55.802 12:48:29 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:35:55.802 12:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.802 12:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.802 12:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.802 12:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.802 12:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.061 12:48:29 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:56.062 12:48:29 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:56.062 12:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:56.322 12:48:29 keyring_file -- keyring/file.sh@104 -- # jq length 00:35:56.322 12:48:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:56.322 12:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.583 12:48:29 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:56.583 12:48:29 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9bWiZXLmdw 00:35:56.583 12:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9bWiZXLmdw 00:35:56.583 12:48:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8WMP3dJ12k 00:35:56.583 12:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8WMP3dJ12k 00:35:56.843 12:48:30 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:56.843 12:48:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:57.104 nvme0n1 00:35:57.104 12:48:30 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:57.104 12:48:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:57.364 12:48:30 keyring_file -- keyring/file.sh@112 -- # config='{ 00:35:57.364 "subsystems": [ 00:35:57.364 { 00:35:57.364 "subsystem": "keyring", 00:35:57.364 "config": [ 00:35:57.364 { 00:35:57.364 "method": "keyring_file_add_key", 00:35:57.364 "params": { 00:35:57.364 "name": "key0", 00:35:57.364 "path": "/tmp/tmp.9bWiZXLmdw" 00:35:57.364 } 00:35:57.364 }, 00:35:57.364 { 00:35:57.364 "method": "keyring_file_add_key", 00:35:57.364 "params": { 00:35:57.364 "name": "key1", 00:35:57.364 "path": "/tmp/tmp.8WMP3dJ12k" 00:35:57.364 } 00:35:57.364 } 00:35:57.364 ] 00:35:57.364 }, 00:35:57.364 { 00:35:57.364 "subsystem": "iobuf", 00:35:57.364 "config": [ 00:35:57.364 { 00:35:57.364 "method": "iobuf_set_options", 00:35:57.364 "params": { 00:35:57.364 "small_pool_count": 8192, 00:35:57.364 "large_pool_count": 1024, 00:35:57.364 "small_bufsize": 8192, 00:35:57.364 "large_bufsize": 135168 00:35:57.364 } 00:35:57.364 } 00:35:57.364 ] 00:35:57.364 }, 00:35:57.364 { 00:35:57.364 "subsystem": "sock", 00:35:57.364 "config": [ 00:35:57.365 { 00:35:57.365 "method": "sock_set_default_impl", 00:35:57.365 "params": { 00:35:57.365 "impl_name": "posix" 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "sock_impl_set_options", 00:35:57.365 "params": { 00:35:57.365 "impl_name": "ssl", 00:35:57.365 "recv_buf_size": 4096, 00:35:57.365 "send_buf_size": 4096, 00:35:57.365 "enable_recv_pipe": true, 00:35:57.365 "enable_quickack": false, 00:35:57.365 "enable_placement_id": 0, 00:35:57.365 "enable_zerocopy_send_server": true, 00:35:57.365 "enable_zerocopy_send_client": false, 00:35:57.365 "zerocopy_threshold": 0, 00:35:57.365 "tls_version": 0, 00:35:57.365 "enable_ktls": false 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "sock_impl_set_options", 00:35:57.365 "params": { 00:35:57.365 "impl_name": "posix", 00:35:57.365 "recv_buf_size": 2097152, 00:35:57.365 "send_buf_size": 2097152, 00:35:57.365 "enable_recv_pipe": true, 00:35:57.365 "enable_quickack": false, 00:35:57.365 "enable_placement_id": 0, 00:35:57.365 "enable_zerocopy_send_server": true, 00:35:57.365 "enable_zerocopy_send_client": false, 00:35:57.365 "zerocopy_threshold": 0, 00:35:57.365 "tls_version": 0, 00:35:57.365 "enable_ktls": false 00:35:57.365 } 00:35:57.365 } 00:35:57.365 ] 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "subsystem": "vmd", 00:35:57.365 "config": [] 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "subsystem": "accel", 00:35:57.365 "config": [ 00:35:57.365 { 00:35:57.365 "method": "accel_set_options", 00:35:57.365 "params": { 00:35:57.365 "small_cache_size": 128, 00:35:57.365 "large_cache_size": 16, 00:35:57.365 "task_count": 2048, 00:35:57.365 "sequence_count": 2048, 00:35:57.365 "buf_count": 2048 00:35:57.365 } 00:35:57.365 } 00:35:57.365 ] 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 
"subsystem": "bdev", 00:35:57.365 "config": [ 00:35:57.365 { 00:35:57.365 "method": "bdev_set_options", 00:35:57.365 "params": { 00:35:57.365 "bdev_io_pool_size": 65535, 00:35:57.365 "bdev_io_cache_size": 256, 00:35:57.365 "bdev_auto_examine": true, 00:35:57.365 "iobuf_small_cache_size": 128, 00:35:57.365 "iobuf_large_cache_size": 16 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "bdev_raid_set_options", 00:35:57.365 "params": { 00:35:57.365 "process_window_size_kb": 1024, 00:35:57.365 "process_max_bandwidth_mb_sec": 0 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "bdev_iscsi_set_options", 00:35:57.365 "params": { 00:35:57.365 "timeout_sec": 30 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "bdev_nvme_set_options", 00:35:57.365 "params": { 00:35:57.365 "action_on_timeout": "none", 00:35:57.365 "timeout_us": 0, 00:35:57.365 "timeout_admin_us": 0, 00:35:57.365 "keep_alive_timeout_ms": 10000, 00:35:57.365 "arbitration_burst": 0, 00:35:57.365 "low_priority_weight": 0, 00:35:57.365 "medium_priority_weight": 0, 00:35:57.365 "high_priority_weight": 0, 00:35:57.365 "nvme_adminq_poll_period_us": 10000, 00:35:57.365 "nvme_ioq_poll_period_us": 0, 00:35:57.365 "io_queue_requests": 512, 00:35:57.365 "delay_cmd_submit": true, 00:35:57.365 "transport_retry_count": 4, 00:35:57.365 "bdev_retry_count": 3, 00:35:57.365 "transport_ack_timeout": 0, 00:35:57.365 "ctrlr_loss_timeout_sec": 0, 00:35:57.365 "reconnect_delay_sec": 0, 00:35:57.365 "fast_io_fail_timeout_sec": 0, 00:35:57.365 "disable_auto_failback": false, 00:35:57.365 "generate_uuids": false, 00:35:57.365 "transport_tos": 0, 00:35:57.365 "nvme_error_stat": false, 00:35:57.365 "rdma_srq_size": 0, 00:35:57.365 "io_path_stat": false, 00:35:57.365 "allow_accel_sequence": false, 00:35:57.365 "rdma_max_cq_size": 0, 00:35:57.365 "rdma_cm_event_timeout_ms": 0, 00:35:57.365 "dhchap_digests": [ 00:35:57.365 "sha256", 00:35:57.365 "sha384", 00:35:57.365 "sha512" 00:35:57.365 ], 00:35:57.365 "dhchap_dhgroups": [ 00:35:57.365 "null", 00:35:57.365 "ffdhe2048", 00:35:57.365 "ffdhe3072", 00:35:57.365 "ffdhe4096", 00:35:57.365 "ffdhe6144", 00:35:57.365 "ffdhe8192" 00:35:57.365 ] 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "bdev_nvme_attach_controller", 00:35:57.365 "params": { 00:35:57.365 "name": "nvme0", 00:35:57.365 "trtype": "TCP", 00:35:57.365 "adrfam": "IPv4", 00:35:57.365 "traddr": "127.0.0.1", 00:35:57.365 "trsvcid": "4420", 00:35:57.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.365 "prchk_reftag": false, 00:35:57.365 "prchk_guard": false, 00:35:57.365 "ctrlr_loss_timeout_sec": 0, 00:35:57.365 "reconnect_delay_sec": 0, 00:35:57.365 "fast_io_fail_timeout_sec": 0, 00:35:57.365 "psk": "key0", 00:35:57.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.365 "hdgst": false, 00:35:57.365 "ddgst": false 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "bdev_nvme_set_hotplug", 00:35:57.365 "params": { 00:35:57.365 "period_us": 100000, 00:35:57.365 "enable": false 00:35:57.365 } 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "method": "bdev_wait_for_examine" 00:35:57.365 } 00:35:57.365 ] 00:35:57.365 }, 00:35:57.365 { 00:35:57.365 "subsystem": "nbd", 00:35:57.365 "config": [] 00:35:57.365 } 00:35:57.365 ] 00:35:57.365 }' 00:35:57.365 12:48:30 keyring_file -- keyring/file.sh@114 -- # killprocess 667293 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 667293 ']' 00:35:57.365 12:48:30 keyring_file -- 
common/autotest_common.sh@952 -- # kill -0 667293 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 667293 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 667293' 00:35:57.365 killing process with pid 667293 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@967 -- # kill 667293 00:35:57.365 Received shutdown signal, test time was about 1.000000 seconds 00:35:57.365 00:35:57.365 Latency(us) 00:35:57.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.365 =================================================================================================================== 00:35:57.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:57.365 12:48:30 keyring_file -- common/autotest_common.sh@972 -- # wait 667293 00:35:57.626 12:48:30 keyring_file -- keyring/file.sh@117 -- # bperfpid=668957 00:35:57.626 12:48:30 keyring_file -- keyring/file.sh@119 -- # waitforlisten 668957 /var/tmp/bperf.sock 00:35:57.626 12:48:30 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 668957 ']' 00:35:57.626 12:48:30 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.626 12:48:30 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:57.626 12:48:30 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
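The relaunch traced here follows a common bdevperf pattern: snapshot the keyring and bdev configuration from the running instance, stop it, and start a fresh bdevperf preloaded with that JSON via process substitution (which the shell exposes as /dev/fd/63 in the trace). A minimal sketch under the same path assumptions as above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  config=$("$rpc" -s /var/tmp/bperf.sock save_config)    # capture before killing the old instance
  # Same workload flags as the test: 4k random read/write (50% mix), queue depth 128, 1 second, core mask 0x2.
  "$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &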
00:35:57.626 12:48:30 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:57.626 12:48:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.626 12:48:30 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:57.626 12:48:30 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:35:57.626 "subsystems": [ 00:35:57.626 { 00:35:57.626 "subsystem": "keyring", 00:35:57.626 "config": [ 00:35:57.626 { 00:35:57.626 "method": "keyring_file_add_key", 00:35:57.626 "params": { 00:35:57.626 "name": "key0", 00:35:57.626 "path": "/tmp/tmp.9bWiZXLmdw" 00:35:57.626 } 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "method": "keyring_file_add_key", 00:35:57.626 "params": { 00:35:57.626 "name": "key1", 00:35:57.626 "path": "/tmp/tmp.8WMP3dJ12k" 00:35:57.626 } 00:35:57.626 } 00:35:57.626 ] 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "subsystem": "iobuf", 00:35:57.626 "config": [ 00:35:57.626 { 00:35:57.626 "method": "iobuf_set_options", 00:35:57.626 "params": { 00:35:57.626 "small_pool_count": 8192, 00:35:57.626 "large_pool_count": 1024, 00:35:57.626 "small_bufsize": 8192, 00:35:57.626 "large_bufsize": 135168 00:35:57.626 } 00:35:57.626 } 00:35:57.626 ] 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "subsystem": "sock", 00:35:57.626 "config": [ 00:35:57.626 { 00:35:57.626 "method": "sock_set_default_impl", 00:35:57.626 "params": { 00:35:57.626 "impl_name": "posix" 00:35:57.626 } 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "method": "sock_impl_set_options", 00:35:57.626 "params": { 00:35:57.626 "impl_name": "ssl", 00:35:57.626 "recv_buf_size": 4096, 00:35:57.626 "send_buf_size": 4096, 00:35:57.626 "enable_recv_pipe": true, 00:35:57.626 "enable_quickack": false, 00:35:57.626 "enable_placement_id": 0, 00:35:57.626 "enable_zerocopy_send_server": true, 00:35:57.626 "enable_zerocopy_send_client": false, 00:35:57.626 "zerocopy_threshold": 0, 00:35:57.626 "tls_version": 0, 00:35:57.626 "enable_ktls": false 00:35:57.626 } 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "method": "sock_impl_set_options", 00:35:57.626 "params": { 00:35:57.626 "impl_name": "posix", 00:35:57.626 "recv_buf_size": 2097152, 00:35:57.626 "send_buf_size": 2097152, 00:35:57.626 "enable_recv_pipe": true, 00:35:57.626 "enable_quickack": false, 00:35:57.626 "enable_placement_id": 0, 00:35:57.626 "enable_zerocopy_send_server": true, 00:35:57.626 "enable_zerocopy_send_client": false, 00:35:57.626 "zerocopy_threshold": 0, 00:35:57.626 "tls_version": 0, 00:35:57.626 "enable_ktls": false 00:35:57.626 } 00:35:57.626 } 00:35:57.626 ] 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "subsystem": "vmd", 00:35:57.626 "config": [] 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "subsystem": "accel", 00:35:57.626 "config": [ 00:35:57.626 { 00:35:57.626 "method": "accel_set_options", 00:35:57.626 "params": { 00:35:57.626 "small_cache_size": 128, 00:35:57.626 "large_cache_size": 16, 00:35:57.626 "task_count": 2048, 00:35:57.626 "sequence_count": 2048, 00:35:57.626 "buf_count": 2048 00:35:57.626 } 00:35:57.626 } 00:35:57.626 ] 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "subsystem": "bdev", 00:35:57.626 "config": [ 00:35:57.626 { 00:35:57.626 "method": "bdev_set_options", 00:35:57.626 "params": { 00:35:57.626 "bdev_io_pool_size": 65535, 00:35:57.626 "bdev_io_cache_size": 256, 00:35:57.626 "bdev_auto_examine": true, 00:35:57.626 "iobuf_small_cache_size": 128, 00:35:57.626 "iobuf_large_cache_size": 16 
00:35:57.626 } 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "method": "bdev_raid_set_options", 00:35:57.626 "params": { 00:35:57.626 "process_window_size_kb": 1024, 00:35:57.626 "process_max_bandwidth_mb_sec": 0 00:35:57.626 } 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "method": "bdev_iscsi_set_options", 00:35:57.626 "params": { 00:35:57.626 "timeout_sec": 30 00:35:57.626 } 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "method": "bdev_nvme_set_options", 00:35:57.626 "params": { 00:35:57.626 "action_on_timeout": "none", 00:35:57.626 "timeout_us": 0, 00:35:57.626 "timeout_admin_us": 0, 00:35:57.626 "keep_alive_timeout_ms": 10000, 00:35:57.626 "arbitration_burst": 0, 00:35:57.626 "low_priority_weight": 0, 00:35:57.626 "medium_priority_weight": 0, 00:35:57.626 "high_priority_weight": 0, 00:35:57.626 "nvme_adminq_poll_period_us": 10000, 00:35:57.626 "nvme_ioq_poll_period_us": 0, 00:35:57.626 "io_queue_requests": 512, 00:35:57.626 "delay_cmd_submit": true, 00:35:57.626 "transport_retry_count": 4, 00:35:57.626 "bdev_retry_count": 3, 00:35:57.626 "transport_ack_timeout": 0, 00:35:57.626 "ctrlr_loss_timeout_sec": 0, 00:35:57.626 "reconnect_delay_sec": 0, 00:35:57.626 "fast_io_fail_timeout_sec": 0, 00:35:57.626 "disable_auto_failback": false, 00:35:57.626 "generate_uuids": false, 00:35:57.626 "transport_tos": 0, 00:35:57.626 "nvme_error_stat": false, 00:35:57.626 "rdma_srq_size": 0, 00:35:57.626 "io_path_stat": false, 00:35:57.626 "allow_accel_sequence": false, 00:35:57.626 "rdma_max_cq_size": 0, 00:35:57.626 "rdma_cm_event_timeout_ms": 0, 00:35:57.626 "dhchap_digests": [ 00:35:57.626 "sha256", 00:35:57.626 "sha384", 00:35:57.626 "sha512" 00:35:57.626 ], 00:35:57.626 "dhchap_dhgroups": [ 00:35:57.626 "null", 00:35:57.626 "ffdhe2048", 00:35:57.626 "ffdhe3072", 00:35:57.626 "ffdhe4096", 00:35:57.626 "ffdhe6144", 00:35:57.626 "ffdhe8192" 00:35:57.626 ] 00:35:57.626 } 00:35:57.626 }, 00:35:57.626 { 00:35:57.626 "method": "bdev_nvme_attach_controller", 00:35:57.626 "params": { 00:35:57.626 "name": "nvme0", 00:35:57.626 "trtype": "TCP", 00:35:57.626 "adrfam": "IPv4", 00:35:57.626 "traddr": "127.0.0.1", 00:35:57.627 "trsvcid": "4420", 00:35:57.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.627 "prchk_reftag": false, 00:35:57.627 "prchk_guard": false, 00:35:57.627 "ctrlr_loss_timeout_sec": 0, 00:35:57.627 "reconnect_delay_sec": 0, 00:35:57.627 "fast_io_fail_timeout_sec": 0, 00:35:57.627 "psk": "key0", 00:35:57.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.627 "hdgst": false, 00:35:57.627 "ddgst": false 00:35:57.627 } 00:35:57.627 }, 00:35:57.627 { 00:35:57.627 "method": "bdev_nvme_set_hotplug", 00:35:57.627 "params": { 00:35:57.627 "period_us": 100000, 00:35:57.627 "enable": false 00:35:57.627 } 00:35:57.627 }, 00:35:57.627 { 00:35:57.627 "method": "bdev_wait_for_examine" 00:35:57.627 } 00:35:57.627 ] 00:35:57.627 }, 00:35:57.627 { 00:35:57.627 "subsystem": "nbd", 00:35:57.627 "config": [] 00:35:57.627 } 00:35:57.627 ] 00:35:57.627 }' 00:35:57.627 [2024-07-25 12:48:30.895056] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:35:57.627 [2024-07-25 12:48:30.895107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668957 ] 00:35:57.627 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.627 [2024-07-25 12:48:30.971404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.887 [2024-07-25 12:48:31.050789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.887 [2024-07-25 12:48:31.202847] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:58.456 12:48:31 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:58.456 12:48:31 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:58.456 12:48:31 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:58.456 12:48:31 keyring_file -- keyring/file.sh@120 -- # jq length 00:35:58.456 12:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.456 12:48:31 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:58.456 12:48:31 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:35:58.456 12:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:58.456 12:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.456 12:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.456 12:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:58.456 12:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.716 12:48:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:58.716 12:48:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:35:58.716 12:48:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:58.716 12:48:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.716 12:48:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.716 12:48:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:58.716 12:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.975 12:48:32 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:58.975 12:48:32 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:58.975 12:48:32 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:58.975 12:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:59.235 12:48:32 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:59.235 12:48:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:59.235 12:48:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9bWiZXLmdw /tmp/tmp.8WMP3dJ12k 00:35:59.235 12:48:32 keyring_file -- keyring/file.sh@20 -- # killprocess 668957 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 668957 ']' 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 668957 00:35:59.235 12:48:32 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668957 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668957' 00:35:59.235 killing process with pid 668957 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@967 -- # kill 668957 00:35:59.235 Received shutdown signal, test time was about 1.000000 seconds 00:35:59.235 00:35:59.235 Latency(us) 00:35:59.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.235 =================================================================================================================== 00:35:59.235 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:59.235 12:48:32 keyring_file -- common/autotest_common.sh@972 -- # wait 668957 00:35:59.496 12:48:32 keyring_file -- keyring/file.sh@21 -- # killprocess 667006 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 667006 ']' 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 667006 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@953 -- # uname 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 667006 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 667006' 00:35:59.496 killing process with pid 667006 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@967 -- # kill 667006 00:35:59.496 [2024-07-25 12:48:32.716990] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:59.496 12:48:32 keyring_file -- common/autotest_common.sh@972 -- # wait 667006 00:35:59.758 00:35:59.758 real 0m12.979s 00:35:59.758 user 0m31.681s 00:35:59.758 sys 0m2.818s 00:35:59.758 12:48:32 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:59.758 12:48:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:59.758 ************************************ 00:35:59.758 END TEST keyring_file 00:35:59.758 ************************************ 00:35:59.758 12:48:32 -- common/autotest_common.sh@1142 -- # return 0 00:35:59.758 12:48:32 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:35:59.758 12:48:32 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:59.758 12:48:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:59.758 12:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:59.758 12:48:32 -- common/autotest_common.sh@10 -- # set +x 00:35:59.758 ************************************ 00:35:59.758 START TEST keyring_linux 00:35:59.758 ************************************ 00:35:59.758 12:48:33 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:59.758 * Looking for test storage... 00:35:59.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:59.758 12:48:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:59.758 12:48:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.758 12:48:33 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.758 12:48:33 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.758 12:48:33 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.758 12:48:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.758 12:48:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.758 12:48:33 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.758 12:48:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:59.758 12:48:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:59.758 12:48:33 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:59.759 12:48:33 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:59.759 12:48:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:59.759 12:48:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:59.759 12:48:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:59.759 12:48:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:59.759 12:48:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:59.759 12:48:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:59.759 12:48:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:59.759 12:48:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:59.759 12:48:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:59.759 12:48:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:59.759 12:48:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:59.759 12:48:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:59.759 12:48:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:59.759 12:48:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:59.759 12:48:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.759 12:48:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:59.759 12:48:33 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:59.759 12:48:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:59.759 12:48:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:00.020 12:48:33 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:00.020 /tmp/:spdk-test:key0 00:36:00.020 12:48:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:00.020 12:48:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:00.020 12:48:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:00.020 12:48:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:00.020 12:48:33 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:00.020 12:48:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:00.020 12:48:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:00.020 12:48:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:00.020 /tmp/:spdk-test:key1 00:36:00.020 12:48:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=669353 00:36:00.020 12:48:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 669353 00:36:00.020 12:48:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:00.020 12:48:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 669353 ']' 00:36:00.020 12:48:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.020 12:48:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:00.020 12:48:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.020 12:48:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:00.020 12:48:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:00.020 [2024-07-25 12:48:33.299069] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
00:36:00.020 [2024-07-25 12:48:33.299121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669353 ] 00:36:00.020 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.020 [2024-07-25 12:48:33.380180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.281 [2024-07-25 12:48:33.444803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:00.852 12:48:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:00.852 [2024-07-25 12:48:34.132107] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.852 null0 00:36:00.852 [2024-07-25 12:48:34.164160] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:00.852 [2024-07-25 12:48:34.164653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.852 12:48:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:00.852 243885115 00:36:00.852 12:48:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:00.852 712555340 00:36:00.852 12:48:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=669620 00:36:00.852 12:48:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 669620 /var/tmp/bperf.sock 00:36:00.852 12:48:34 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 669620 ']' 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:00.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:00.852 12:48:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:00.852 [2024-07-25 12:48:34.246277] Starting SPDK v24.09-pre git sha1 8fdaab4b1 / DPDK 24.03.0 initialization... 
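The keyring_linux run that follows exercises the same attach path but sources the PSKs from the kernel session keyring rather than from files. A minimal sketch of that flow, assuming the rpc.py path and bperf socket used throughout this log (key serial numbers will differ per run):

  # Load interchange-format PSKs into the session keyring under the names later referenced via --psk.
  keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
  keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf is started with --wait-for-rpc, so enable the linux keyring backend before framework init.
  "$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  "$rpc" -s /var/tmp/bperf.sock framework_start_init
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  # Cleanup mirrors the test: look up the key's serial number and unlink it from the session keyring.
  keyctl unlink "$(keyctl search @s user :spdk-test:key0)"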
00:36:00.852 [2024-07-25 12:48:34.246325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669620 ] 00:36:01.112 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.112 [2024-07-25 12:48:34.321889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.112 [2024-07-25 12:48:34.400492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.683 12:48:35 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:01.683 12:48:35 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:01.683 12:48:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:01.683 12:48:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:01.943 12:48:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:01.943 12:48:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:02.203 12:48:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:02.203 12:48:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:02.464 [2024-07-25 12:48:35.680850] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:02.464 nvme0n1 00:36:02.464 12:48:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:02.464 12:48:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:02.464 12:48:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:02.464 12:48:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:02.464 12:48:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:02.464 12:48:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.724 12:48:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:02.724 12:48:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:02.724 12:48:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:02.724 12:48:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:02.724 12:48:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.724 12:48:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.724 12:48:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:02.985 12:48:36 keyring_linux -- keyring/linux.sh@25 -- # sn=243885115 00:36:02.985 12:48:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:02.985 12:48:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:36:02.985 12:48:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 243885115 == \2\4\3\8\8\5\1\1\5 ]] 00:36:02.985 12:48:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 243885115 00:36:02.985 12:48:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:02.985 12:48:36 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:02.985 Running I/O for 1 seconds... 00:36:03.927 00:36:03.927 Latency(us) 00:36:03.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.927 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:03.927 nvme0n1 : 1.01 13734.08 53.65 0.00 0.00 9285.09 5898.24 18450.90 00:36:03.927 =================================================================================================================== 00:36:03.927 Total : 13734.08 53.65 0.00 0.00 9285.09 5898.24 18450.90 00:36:03.927 0 00:36:03.927 12:48:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:03.927 12:48:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:04.188 12:48:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:04.188 12:48:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:04.188 12:48:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:04.188 12:48:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:04.188 12:48:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:04.188 12:48:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.448 12:48:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:04.448 12:48:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:04.448 12:48:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:04.448 12:48:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:04.448 12:48:37 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:04.448 12:48:37 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:04.449 12:48:37 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:04.449 12:48:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:04.449 12:48:37 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:04.449 12:48:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:04.449 12:48:37 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:04.449 12:48:37 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:04.711 [2024-07-25 12:48:37.876175] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:04.711 [2024-07-25 12:48:37.877002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769be0 (107): Transport endpoint is not connected 00:36:04.711 [2024-07-25 12:48:37.877994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769be0 (9): Bad file descriptor 00:36:04.711 [2024-07-25 12:48:37.878996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:04.711 [2024-07-25 12:48:37.879011] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:04.711 [2024-07-25 12:48:37.879022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:04.711 request: 00:36:04.711 { 00:36:04.711 "name": "nvme0", 00:36:04.711 "trtype": "tcp", 00:36:04.711 "traddr": "127.0.0.1", 00:36:04.711 "adrfam": "ipv4", 00:36:04.711 "trsvcid": "4420", 00:36:04.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.711 "prchk_reftag": false, 00:36:04.711 "prchk_guard": false, 00:36:04.711 "hdgst": false, 00:36:04.711 "ddgst": false, 00:36:04.711 "psk": ":spdk-test:key1", 00:36:04.711 "method": "bdev_nvme_attach_controller", 00:36:04.711 "req_id": 1 00:36:04.711 } 00:36:04.711 Got JSON-RPC error response 00:36:04.711 response: 00:36:04.711 { 00:36:04.711 "code": -5, 00:36:04.711 "message": "Input/output error" 00:36:04.711 } 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@33 -- # sn=243885115 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 243885115 00:36:04.711 1 links removed 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@33 -- # sn=712555340 00:36:04.711 
12:48:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 712555340 00:36:04.711 1 links removed 00:36:04.711 12:48:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 669620 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 669620 ']' 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 669620 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 669620 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 669620' 00:36:04.711 killing process with pid 669620 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 669620 00:36:04.711 Received shutdown signal, test time was about 1.000000 seconds 00:36:04.711 00:36:04.711 Latency(us) 00:36:04.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.711 =================================================================================================================== 00:36:04.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:04.711 12:48:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 669620 00:36:04.711 12:48:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 669353 00:36:04.711 12:48:38 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 669353 ']' 00:36:04.711 12:48:38 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 669353 00:36:04.711 12:48:38 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:04.711 12:48:38 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:04.711 12:48:38 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 669353 00:36:04.972 12:48:38 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:04.972 12:48:38 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:04.972 12:48:38 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 669353' 00:36:04.972 killing process with pid 669353 00:36:04.972 12:48:38 keyring_linux -- common/autotest_common.sh@967 -- # kill 669353 00:36:04.972 12:48:38 keyring_linux -- common/autotest_common.sh@972 -- # wait 669353 00:36:04.972 00:36:04.972 real 0m5.364s 00:36:04.972 user 0m10.245s 00:36:04.972 sys 0m1.376s 00:36:04.972 12:48:38 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:04.972 12:48:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:04.972 ************************************ 00:36:04.972 END TEST keyring_linux 00:36:04.972 ************************************ 00:36:05.233 12:48:38 -- common/autotest_common.sh@1142 -- # return 0 00:36:05.233 12:48:38 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:05.233 12:48:38 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:05.233 12:48:38 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:05.233 12:48:38 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:05.233 12:48:38 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:05.233 12:48:38 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:05.233 12:48:38 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:05.233 12:48:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:05.233 12:48:38 -- common/autotest_common.sh@10 -- # set +x 00:36:05.233 12:48:38 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:05.233 12:48:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:05.233 12:48:38 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:05.233 12:48:38 -- common/autotest_common.sh@10 -- # set +x 00:36:11.823 INFO: APP EXITING 00:36:11.823 INFO: killing all VMs 00:36:11.823 INFO: killing vhost app 00:36:11.823 INFO: EXIT DONE 00:36:16.031 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:36:16.031 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:16.031 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:16.292 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:16.292 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:20.504 Cleaning 00:36:20.504 Removing: /var/run/dpdk/spdk0/config 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:20.504 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:20.504 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:20.504 Removing: /var/run/dpdk/spdk1/config 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:20.504 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:20.504 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:20.504 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:20.504 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:20.504 Removing: /var/run/dpdk/spdk2/config 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:20.504 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:20.504 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:20.504 Removing: /var/run/dpdk/spdk3/config 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:20.504 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:20.504 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:20.504 Removing: /var/run/dpdk/spdk4/config 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:20.504 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:20.504 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:20.504 Removing: /dev/shm/bdev_svc_trace.1 00:36:20.504 Removing: /dev/shm/nvmf_trace.0 00:36:20.504 Removing: /dev/shm/spdk_tgt_trace.pid215620 00:36:20.504 Removing: /var/run/dpdk/spdk0 00:36:20.504 Removing: /var/run/dpdk/spdk1 00:36:20.504 Removing: /var/run/dpdk/spdk2 00:36:20.504 Removing: /var/run/dpdk/spdk3 00:36:20.504 Removing: /var/run/dpdk/spdk4 00:36:20.504 Removing: /var/run/dpdk/spdk_pid211681 00:36:20.504 Removing: /var/run/dpdk/spdk_pid213399 00:36:20.504 Removing: /var/run/dpdk/spdk_pid215620 00:36:20.504 Removing: /var/run/dpdk/spdk_pid216146 00:36:20.504 Removing: /var/run/dpdk/spdk_pid217091 00:36:20.504 Removing: /var/run/dpdk/spdk_pid217404 00:36:20.504 Removing: /var/run/dpdk/spdk_pid218371 00:36:20.505 Removing: /var/run/dpdk/spdk_pid218405 00:36:20.505 Removing: /var/run/dpdk/spdk_pid218795 00:36:20.505 Removing: /var/run/dpdk/spdk_pid220459 00:36:20.505 Removing: /var/run/dpdk/spdk_pid221790 00:36:20.505 Removing: /var/run/dpdk/spdk_pid222146 
00:36:20.505 Removing: /var/run/dpdk/spdk_pid222497 00:36:20.505 Removing: /var/run/dpdk/spdk_pid222878 00:36:20.505 Removing: /var/run/dpdk/spdk_pid223135 00:36:20.505 Removing: /var/run/dpdk/spdk_pid223291 00:36:20.505 Removing: /var/run/dpdk/spdk_pid223600 00:36:20.505 Removing: /var/run/dpdk/spdk_pid223943 00:36:20.505 Removing: /var/run/dpdk/spdk_pid224916 00:36:20.505 Removing: /var/run/dpdk/spdk_pid227896 00:36:20.505 Removing: /var/run/dpdk/spdk_pid228234 00:36:20.505 Removing: /var/run/dpdk/spdk_pid228588 00:36:20.505 Removing: /var/run/dpdk/spdk_pid228871 00:36:20.505 Removing: /var/run/dpdk/spdk_pid229215 00:36:20.505 Removing: /var/run/dpdk/spdk_pid229219 00:36:20.505 Removing: /var/run/dpdk/spdk_pid229561 00:36:20.505 Removing: /var/run/dpdk/spdk_pid229861 00:36:20.505 Removing: /var/run/dpdk/spdk_pid229932 00:36:20.505 Removing: /var/run/dpdk/spdk_pid230209 00:36:20.505 Removing: /var/run/dpdk/spdk_pid230271 00:36:20.505 Removing: /var/run/dpdk/spdk_pid230551 00:36:20.505 Removing: /var/run/dpdk/spdk_pid230977 00:36:20.505 Removing: /var/run/dpdk/spdk_pid231290 00:36:20.505 Removing: /var/run/dpdk/spdk_pid231645 00:36:20.505 Removing: /var/run/dpdk/spdk_pid231986 00:36:20.505 Removing: /var/run/dpdk/spdk_pid232015 00:36:20.505 Removing: /var/run/dpdk/spdk_pid232084 00:36:20.505 Removing: /var/run/dpdk/spdk_pid232401 00:36:20.505 Removing: /var/run/dpdk/spdk_pid232716 00:36:20.505 Removing: /var/run/dpdk/spdk_pid232775 00:36:20.505 Removing: /var/run/dpdk/spdk_pid233076 00:36:20.505 Removing: /var/run/dpdk/spdk_pid233393 00:36:20.505 Removing: /var/run/dpdk/spdk_pid233674 00:36:20.505 Removing: /var/run/dpdk/spdk_pid233775 00:36:20.505 Removing: /var/run/dpdk/spdk_pid234068 00:36:20.505 Removing: /var/run/dpdk/spdk_pid234389 00:36:20.505 Removing: /var/run/dpdk/spdk_pid234706 00:36:20.505 Removing: /var/run/dpdk/spdk_pid234852 00:36:20.505 Removing: /var/run/dpdk/spdk_pid235071 00:36:20.505 Removing: /var/run/dpdk/spdk_pid235381 00:36:20.505 Removing: /var/run/dpdk/spdk_pid235711 00:36:20.505 Removing: /var/run/dpdk/spdk_pid235969 00:36:20.505 Removing: /var/run/dpdk/spdk_pid236083 00:36:20.505 Removing: /var/run/dpdk/spdk_pid236388 00:36:20.505 Removing: /var/run/dpdk/spdk_pid236713 00:36:20.505 Removing: /var/run/dpdk/spdk_pid237035 00:36:20.505 Removing: /var/run/dpdk/spdk_pid237204 00:36:20.505 Removing: /var/run/dpdk/spdk_pid237425 00:36:20.505 Removing: /var/run/dpdk/spdk_pid237781 00:36:20.505 Removing: /var/run/dpdk/spdk_pid242452 00:36:20.505 Removing: /var/run/dpdk/spdk_pid247876 00:36:20.505 Removing: /var/run/dpdk/spdk_pid259674 00:36:20.505 Removing: /var/run/dpdk/spdk_pid260293 00:36:20.505 Removing: /var/run/dpdk/spdk_pid265484 00:36:20.505 Removing: /var/run/dpdk/spdk_pid265806 00:36:20.765 Removing: /var/run/dpdk/spdk_pid271245 00:36:20.765 Removing: /var/run/dpdk/spdk_pid278240 00:36:20.765 Removing: /var/run/dpdk/spdk_pid281057 00:36:20.765 Removing: /var/run/dpdk/spdk_pid293872 00:36:20.765 Removing: /var/run/dpdk/spdk_pid305303 00:36:20.765 Removing: /var/run/dpdk/spdk_pid307104 00:36:20.765 Removing: /var/run/dpdk/spdk_pid308029 00:36:20.765 Removing: /var/run/dpdk/spdk_pid328244 00:36:20.765 Removing: /var/run/dpdk/spdk_pid333343 00:36:20.765 Removing: /var/run/dpdk/spdk_pid386187 00:36:20.765 Removing: /var/run/dpdk/spdk_pid392458 00:36:20.765 Removing: /var/run/dpdk/spdk_pid399344 00:36:20.765 Removing: /var/run/dpdk/spdk_pid407030 00:36:20.765 Removing: /var/run/dpdk/spdk_pid407035 00:36:20.765 Removing: /var/run/dpdk/spdk_pid407944 00:36:20.765 
Removing: /var/run/dpdk/spdk_pid408859 00:36:20.766 Removing: /var/run/dpdk/spdk_pid409771 00:36:20.766 Removing: /var/run/dpdk/spdk_pid410243 00:36:20.766 Removing: /var/run/dpdk/spdk_pid410376 00:36:20.766 Removing: /var/run/dpdk/spdk_pid410532 00:36:20.766 Removing: /var/run/dpdk/spdk_pid410694 00:36:20.766 Removing: /var/run/dpdk/spdk_pid410705 00:36:20.766 Removing: /var/run/dpdk/spdk_pid411609 00:36:20.766 Removing: /var/run/dpdk/spdk_pid412516 00:36:20.766 Removing: /var/run/dpdk/spdk_pid413386 00:36:20.766 Removing: /var/run/dpdk/spdk_pid413838 00:36:20.766 Removing: /var/run/dpdk/spdk_pid413970 00:36:20.766 Removing: /var/run/dpdk/spdk_pid414149 00:36:20.766 Removing: /var/run/dpdk/spdk_pid415393 00:36:20.766 Removing: /var/run/dpdk/spdk_pid416638 00:36:20.766 Removing: /var/run/dpdk/spdk_pid425699 00:36:20.766 Removing: /var/run/dpdk/spdk_pid456730 00:36:20.766 Removing: /var/run/dpdk/spdk_pid462232 00:36:20.766 Removing: /var/run/dpdk/spdk_pid464007 00:36:20.766 Removing: /var/run/dpdk/spdk_pid465868 00:36:20.766 Removing: /var/run/dpdk/spdk_pid466103 00:36:20.766 Removing: /var/run/dpdk/spdk_pid466231 00:36:20.766 Removing: /var/run/dpdk/spdk_pid466518 00:36:20.766 Removing: /var/run/dpdk/spdk_pid467194 00:36:20.766 Removing: /var/run/dpdk/spdk_pid469094 00:36:20.766 Removing: /var/run/dpdk/spdk_pid470289 00:36:20.766 Removing: /var/run/dpdk/spdk_pid470812 00:36:20.766 Removing: /var/run/dpdk/spdk_pid473219 00:36:20.766 Removing: /var/run/dpdk/spdk_pid473875 00:36:20.766 Removing: /var/run/dpdk/spdk_pid474995 00:36:20.766 Removing: /var/run/dpdk/spdk_pid480132 00:36:20.766 Removing: /var/run/dpdk/spdk_pid492159 00:36:20.766 Removing: /var/run/dpdk/spdk_pid496183 00:36:20.766 Removing: /var/run/dpdk/spdk_pid503296 00:36:20.766 Removing: /var/run/dpdk/spdk_pid504665 00:36:20.766 Removing: /var/run/dpdk/spdk_pid506063 00:36:20.766 Removing: /var/run/dpdk/spdk_pid511474 00:36:20.766 Removing: /var/run/dpdk/spdk_pid516446 00:36:20.766 Removing: /var/run/dpdk/spdk_pid526073 00:36:20.766 Removing: /var/run/dpdk/spdk_pid526268 00:36:20.766 Removing: /var/run/dpdk/spdk_pid532149 00:36:20.766 Removing: /var/run/dpdk/spdk_pid532431 00:36:20.766 Removing: /var/run/dpdk/spdk_pid532682 00:36:20.766 Removing: /var/run/dpdk/spdk_pid533043 00:36:20.766 Removing: /var/run/dpdk/spdk_pid533048 00:36:20.766 Removing: /var/run/dpdk/spdk_pid538787 00:36:20.766 Removing: /var/run/dpdk/spdk_pid539250 00:36:20.766 Removing: /var/run/dpdk/spdk_pid544808 00:36:20.766 Removing: /var/run/dpdk/spdk_pid547554 00:36:21.027 Removing: /var/run/dpdk/spdk_pid553929 00:36:21.027 Removing: /var/run/dpdk/spdk_pid560445 00:36:21.027 Removing: /var/run/dpdk/spdk_pid570304 00:36:21.027 Removing: /var/run/dpdk/spdk_pid579267 00:36:21.027 Removing: /var/run/dpdk/spdk_pid579276 00:36:21.027 Removing: /var/run/dpdk/spdk_pid601664 00:36:21.027 Removing: /var/run/dpdk/spdk_pid602362 00:36:21.027 Removing: /var/run/dpdk/spdk_pid603140 00:36:21.027 Removing: /var/run/dpdk/spdk_pid603806 00:36:21.027 Removing: /var/run/dpdk/spdk_pid604776 00:36:21.027 Removing: /var/run/dpdk/spdk_pid605402 00:36:21.027 Removing: /var/run/dpdk/spdk_pid606020 00:36:21.027 Removing: /var/run/dpdk/spdk_pid606643 00:36:21.027 Removing: /var/run/dpdk/spdk_pid611820 00:36:21.027 Removing: /var/run/dpdk/spdk_pid612111 00:36:21.027 Removing: /var/run/dpdk/spdk_pid619163 00:36:21.027 Removing: /var/run/dpdk/spdk_pid619539 00:36:21.027 Removing: /var/run/dpdk/spdk_pid622276 00:36:21.027 Removing: /var/run/dpdk/spdk_pid631424 00:36:21.027 Removing: 
/var/run/dpdk/spdk_pid631429 00:36:21.027 Removing: /var/run/dpdk/spdk_pid638122 00:36:21.027 Removing: /var/run/dpdk/spdk_pid640097 00:36:21.027 Removing: /var/run/dpdk/spdk_pid642111 00:36:21.027 Removing: /var/run/dpdk/spdk_pid643187 00:36:21.027 Removing: /var/run/dpdk/spdk_pid645238 00:36:21.027 Removing: /var/run/dpdk/spdk_pid646564 00:36:21.027 Removing: /var/run/dpdk/spdk_pid656895 00:36:21.027 Removing: /var/run/dpdk/spdk_pid657479 00:36:21.027 Removing: /var/run/dpdk/spdk_pid658081 00:36:21.027 Removing: /var/run/dpdk/spdk_pid660751 00:36:21.027 Removing: /var/run/dpdk/spdk_pid661308 00:36:21.027 Removing: /var/run/dpdk/spdk_pid661966 00:36:21.027 Removing: /var/run/dpdk/spdk_pid667006 00:36:21.027 Removing: /var/run/dpdk/spdk_pid667293 00:36:21.027 Removing: /var/run/dpdk/spdk_pid668957 00:36:21.027 Removing: /var/run/dpdk/spdk_pid669353 00:36:21.027 Removing: /var/run/dpdk/spdk_pid669620 00:36:21.027 Clean 00:36:21.027 12:48:54 -- common/autotest_common.sh@1451 -- # return 0 00:36:21.027 12:48:54 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:21.027 12:48:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:21.027 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:36:21.287 12:48:54 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:21.287 12:48:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:21.287 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:36:21.287 12:48:54 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:21.287 12:48:54 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:21.287 12:48:54 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:21.287 12:48:54 -- spdk/autotest.sh@391 -- # hash lcov 00:36:21.287 12:48:54 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:21.287 12:48:54 -- spdk/autotest.sh@393 -- # hostname 00:36:21.287 12:48:54 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-CYP-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:21.548 geninfo: WARNING: invalid characters removed from testname! 
00:36:48.130 12:49:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:54.754 12:49:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:02.891 12:49:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:09.499 12:49:41 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.080 12:49:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.659 12:49:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:29.240 12:50:02 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:29.240 12:50:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.240 12:50:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:29.240 12:50:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.240 12:50:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.240 12:50:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.240 12:50:02 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.240 12:50:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.240 12:50:02 -- paths/export.sh@5 -- $ export PATH 00:37:29.240 12:50:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.240 12:50:02 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:29.240 12:50:02 -- common/autobuild_common.sh@447 -- $ date +%s 00:37:29.240 12:50:02 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721904602.XXXXXX 00:37:29.240 12:50:02 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721904602.Okeaja 00:37:29.240 12:50:02 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:37:29.240 12:50:02 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:37:29.240 12:50:02 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:37:29.240 12:50:02 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:29.240 12:50:02 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:29.240 12:50:02 -- common/autobuild_common.sh@463 -- $ get_config_params 00:37:29.240 12:50:02 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:37:29.240 12:50:02 -- common/autotest_common.sh@10 -- $ set +x 00:37:29.240 12:50:02 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:37:29.240 12:50:02 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:37:29.240 12:50:02 -- pm/common@17 -- $ local monitor 00:37:29.240 12:50:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.240 12:50:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.240 12:50:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.240 12:50:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.240 12:50:02 -- pm/common@25 -- $ sleep 1 00:37:29.240 12:50:02 -- pm/common@21 -- $ date +%s 00:37:29.240 
12:50:02 -- pm/common@21 -- $ date +%s 00:37:29.240 12:50:02 -- pm/common@21 -- $ date +%s 00:37:29.240 12:50:02 -- pm/common@21 -- $ date +%s 00:37:29.240 12:50:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721904602 00:37:29.240 12:50:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721904602 00:37:29.240 12:50:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721904602 00:37:29.240 12:50:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721904602 00:37:29.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721904602_collect-vmstat.pm.log 00:37:29.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721904602_collect-cpu-load.pm.log 00:37:29.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721904602_collect-cpu-temp.pm.log 00:37:29.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721904602_collect-bmc-pm.bmc.pm.log 00:37:30.443 12:50:03 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:37:30.443 12:50:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:37:30.443 12:50:03 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:30.443 12:50:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:30.443 12:50:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:30.443 12:50:03 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:30.443 12:50:03 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:30.443 12:50:03 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:30.443 12:50:03 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:30.443 12:50:03 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:30.443 12:50:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:30.443 12:50:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:30.443 12:50:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:30.443 12:50:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:30.443 12:50:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:30.443 12:50:03 -- pm/common@44 -- $ pid=681735 00:37:30.443 12:50:03 -- pm/common@50 -- $ kill -TERM 681735 00:37:30.443 12:50:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:30.443 12:50:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:30.443 12:50:03 -- pm/common@44 -- $ pid=681736 00:37:30.443 12:50:03 -- pm/common@50 -- $ kill 
-TERM 681736 00:37:30.443 12:50:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:30.443 12:50:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:30.443 12:50:03 -- pm/common@44 -- $ pid=681737 00:37:30.443 12:50:03 -- pm/common@50 -- $ kill -TERM 681737 00:37:30.443 12:50:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:30.443 12:50:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:30.443 12:50:03 -- pm/common@44 -- $ pid=681761 00:37:30.443 12:50:03 -- pm/common@50 -- $ sudo -E kill -TERM 681761 00:37:30.443 + [[ -n 93881 ]] 00:37:30.443 + sudo kill 93881 00:37:30.453 [Pipeline] } 00:37:30.463 [Pipeline] // stage 00:37:30.468 [Pipeline] } 00:37:30.479 [Pipeline] // timeout 00:37:30.484 [Pipeline] } 00:37:30.500 [Pipeline] // catchError 00:37:30.506 [Pipeline] } 00:37:30.523 [Pipeline] // wrap 00:37:30.527 [Pipeline] } 00:37:30.542 [Pipeline] // catchError 00:37:30.551 [Pipeline] stage 00:37:30.553 [Pipeline] { (Epilogue) 00:37:30.568 [Pipeline] catchError 00:37:30.570 [Pipeline] { 00:37:30.586 [Pipeline] echo 00:37:30.588 Cleanup processes 00:37:30.594 [Pipeline] sh 00:37:30.881 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:30.881 681839 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:30.881 682234 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:30.897 [Pipeline] sh 00:37:31.184 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:31.184 ++ grep -v 'sudo pgrep' 00:37:31.184 ++ awk '{print $1}' 00:37:31.184 + sudo kill -9 681839 00:37:31.199 [Pipeline] sh 00:37:31.489 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:43.779 [Pipeline] sh 00:37:44.068 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:44.068 Artifacts sizes are good 00:37:44.086 [Pipeline] archiveArtifacts 00:37:44.094 Archiving artifacts 00:37:44.283 [Pipeline] sh 00:37:44.568 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:44.581 [Pipeline] cleanWs 00:37:44.590 [WS-CLEANUP] Deleting project workspace... 00:37:44.590 [WS-CLEANUP] Deferred wipeout is used... 00:37:44.598 [WS-CLEANUP] done 00:37:44.599 [Pipeline] } 00:37:44.616 [Pipeline] // catchError 00:37:44.627 [Pipeline] sh 00:37:44.913 + logger -p user.info -t JENKINS-CI 00:37:44.921 [Pipeline] } 00:37:44.934 [Pipeline] // stage 00:37:44.938 [Pipeline] } 00:37:44.951 [Pipeline] // node 00:37:44.955 [Pipeline] End of Pipeline 00:37:44.971 Finished: SUCCESS